
O’Neill’s Dark Energy at BAM

Richard Termine: Matthew Beard as Edmund Tyrone, Jeremy Irons as James Tyrone, Rory Keenan as Jamie Tyrone, and Lesley Manville as Mary Tyrone in Eugene O’Neill’s Long Day’s Journey Into Night at the Brooklyn Academy of Music, 2018

For the first stretch of the opening-night performance of Richard Eyre’s Bristol Old Vic production of Long Day’s Journey into Night at BAM, I had the uneasy sense that things might have been jumpstarted at too rapid a clip. The impression may have owed something to the annoying habit indulged by some theatergoers of applauding celebrity actors (in this case, Jeremy Irons and Lesley Manville) when they make their first entrance, even if it means drowning out the opening line of a most delicately calibrated play. Irons retrieved matters by quickly repeating the line, but there did seem something a little wrong-footed in the way the dialogue between the two raced along from then on, not allowing for even a hint of apparent domestic calm before things started spinning out of control.

There is, after all, only one moment of rest in the play: before anything has happened. The initial moment—when James Tyrone (Irons), using all his actorly charm to compliment his wife Mary (Manville) on what “a fine armful” she has become while she was away, can still believe, or pretend to believe, that things are back to normal, and that Mary has been cured of her drug habit at the sanatorium she has just returned from—is so brief as barely to register. The façade will begin promptly to erode—nothing is brought to our attention except as it falls apart—and when their sons Jamie (Rory Keenan) and Edmund (Matthew Beard) come into the room they will join the process by which every form of reassurance will be chipped away by evasion or contradiction or direct attack. Here at BAM, that process kicked off at a headlong pitch that at first felt uneasily rushed.

Ultimately, as the wider arc of Lesley Manville’s performance became apparent, the unease made sense. The rattled breathlessness of her delivery, as if half a second’s interruption would bring everything crashing down, established the state of things in the Tyrone household with no delay: the masks are already off. Manville’s Mary is not merely distracted but positively a junkie with screaming nerves, turning her head from side to side almost spasmodically, not knowing what to say or do from one second to the next, her words tearing along like a runaway train. Her speediness pulls the rest of them along, struggling to keep pace with her and revealing at once that none of them is in control.

Richard Termine: Lesley Manville as Mary in Long Day’s Journey Into Night at the Brooklyn Academy of Music, 2018

Even more than usual, in this production Mary is the center around which the rest move in denial, sometimes pausing to confront her in brief outbursts of anger or pleading, or turning away as if in the hope she might disappear. The extent to which these three men live in fear of her is manifest in the cowering dread with which, in the last act, they listen to her footsteps pacing in the upstairs bedroom. Mary inherits the ultimate curse of solitude—she is the only one ever alone on stage, and even speaks a few lines of soliloquy—and Manville comes into her own triumphantly in the final two acts, as Mary becomes the creator of her own theater of memory, a play within a play for which she becomes all the characters and is the only audience. In the heart of solitary delusion, she becomes the being she truly is, the being that flakes off into fragments in her dealings with others.

Tempo is crucial since O’Neill is so essentially a musical writer. Robert Falls, who directed a bracing production of The Iceman Cometh at BAM three years ago, has remarked of him: “He’s writing a score.” If Iceman is an orchestral work for some twenty voices, rising often into busy ensemble passages, Long Day’s Journey into Night is his supreme chamber piece. The four chief instruments—James Tyrone and Mary and Jamie and Edmund—are sharply differentiated whether sparring in duets or quartets or launching into extended solos. (The fifth voice, that of their maid Cathleen, played by Jessica Regan, is injected into the middle of the play as a brief tonal respite, comic and oblivious, to break up an otherwise inexorably gathering heaviness.)

To think of these characters as instruments rather than agents goes to the heart of the play. For all the reiterated talk of “willpower”—specifically with regard to Mary’s morphine addiction, yet pointing also to the men’s habitual drunkenness, Tyrone’s obsessive parsimony, Jamie’s self-lacerating pessimism—everything shows them deprived of any real liberty, even enough liberty to keep from saying the same things uselessly, again and again, in conversations that connect only fitfully before subsiding into postures of resentment and hopelessness. The family dysfunction of which Long Day’s Journey is the classic portrait is embodied in a music of violent stasis, in which forces of attraction and repulsion toggle perpetually back and forth. However isolated each voice, the echoes of the others are always hanging in the air around it, all of them inextricably tied together no matter how stubbornly they tug to pull free.

In Eyre’s production, the protagonists circle continually around one another, occasionally lurching into violent contact, sometimes attempting affectionate overtures that are quickly curtailed. Early on, there is a good deal of energetically overlapping dialogue, and it is only gradually that each of the actors emerges fully. Irons is a lean Tyrone tightly wound within himself, with only a hint of the grandiloquence of the theatrical idol. (Irons certainly brings a persuasive note to his portrayal of a popular star who has reached the stage of burnt-out reflectiveness.) Even his flare-ups of patriarchal wrath when egged on by Jamie or Edmund are half-hearted, scenes, one senses, that have been played many times too often. If Mary is a force of chaos set loose in the household, the senior Tyrone is the principle of order reduced to a melancholy but formally correct stance.

Rory Keenan’s Jamie has the right mix of wiseguy humor and bitter contempt, and in his final drunk tirade manages to strip away any trace of empathetic feeling; while Matthew Beard, as Edmund, more than holds his own in a part that can sometimes seem the play’s weakest link. As a stand-in for O’Neill, the consumptive Edmund hovers on the periphery, more observer than participant. The others figure as what they irrevocably are, but he is not yet realized, and thus not altogether doomed. As Beard plays him, Edmund is very much the writer in embryo, restlessly roving around the room, never without the safety valve of a book in hand. In his cups, he turns his speeches into literary sketches. The monologue about his transcendent experience at sea of seeing “the veil of things as they seem drawn back by an unseen hand” can seem endless and mawkish, but here it plays as an ambitious young poet self-consciously trying out his powers of invention, showing off for his father, half-exhilarated and half-depressed at the results.

Narrative is of little account in Long Day’s Journey; the tale is almost blurted out in order to get at what matters. This is, after all, as close as O’Neill could have gotten to putting his early life on stage. The painful secrets revealed are his own. It is not a play about family, but family fully realized as a play. The underlying rhythm is of a ritual whose phases are preordained, a ritual of progressive and exhausting exposure. What has been set in motion must continue long past the point of any reasonable hope—in fact, to a point of bone-weariness—and yet the production radiates an energy close to ecstasy: a sorrow not enervating but vital. It is a work that mercilessly tests each actor’s ability to inhabit roles that are not characters but beings, summoned by an authorial process that can only be conceived as an occult attempt to restore speech to the dead.

Richard Termine: Beard as Edmund and Irons as James in Long Day’s Journey Into Night at the Brooklyn Academy of Music, 2018

The sense of exhaustion is accentuated by those diabolical patterns of repetition that were O’Neill’s fundamental device. For some readers and playgoers, his repetitions are a flaw and a mark of stylized implausibility, sometimes eliciting nervous laughter. Stylized they may be—his theatrical mode is always expressionist at its core—yet they mark the seam where his sense of music and his sense of brute reality are joined. To diagram any of his plays by the frequency and arrangement of its repetitions, of words and behavior and the recurrence of memories, would be to define its essential shape. The bits and clumps of language his people grab at—“snoring” and “fog” and “quack” and “summer cold” and “willpower” and “morbid” and “cheap hotels” and “it’s a good man’s failing”—are turned around, tossed back and forth, questioned, and seized on as a last resort. If the Tyrones scarcely pause to search for a word, it is because they are condemned to repeat what they have said a thousand times before. In between the repeated words are the repeated sounds: the foghorn, Mary’s pacing footsteps, the comforting gurgle of whiskey poured into a glass.

Rob Howell’s set design gives us a sense of skewed perspective. The left side of the stage is dominated by an impressive ceiling-high bookcase with the matched sets of classics of which O’Neill writes in a stage direction: “The astonishing thing about these sets is that all the volumes have the look of having been read and reread.” The foreground has the minimal elements of a 1912 interior, some chairs, a couch, and the table where the booze is poured: this tentative space is where the family makes its gestures toward being a family. But the room’s timelessly abstract wall slants inward, its angle suggesting an arctic bareness in the house’s inner reaches, the jail-like abstraction of what, for Mary, can never be a home. As the characters back toward its various exits and passageways, each can duck toward some offstage escape hatch, whether the upstairs bedroom where Mary takes her morphine or the tavern where the men take their comfort. The world outside the room is invisible, and the lighting translates the foggy overcast weather into a darkness that sets in long before night falls.

That the prominence of the bookcase is not a casual touch comes to the fore in the last act, with its long passages of poetic declamation. The Tyrones are a literary family not by comfortable habit but for dear life. The autodidact James Tyrone has saved himself by literature, with dreams of becoming a great Shakespearean actor, and at the same time has betrayed literature by succumbing to the easy money of a popular melodramatic success. His life has been a matter of reciting words written by others; the alcoholic Jamie has the talent and the talk of a writer, but is incapable of being one; and Edmund, book in hand, has not yet become the writer he needs to be to avoid self-destruction. His only redemption will be the very play we have been watching, the play he has in extremis been able to write.

When Tyrone and Edmund confront each other in the last act, their weapons are Shakespeare on the one hand and Dowson and Baudelaire on the other. (The present production underscores this battle by expanding the passage from The Tempest recited by Tyrone, and Irons uses this to display at once the sincerity of Tyrone’s craft—his recitation is not a matter of bombastic bluff—and the impairment of his ability to summon up the lines.) Jamie’s belated drunken return is punctuated by recitations of Rossetti and Swinburne, a reminder that Edmund will never be entirely free of his brother’s influence. And as Mary, now terminally lost in memory, makes her final entrance, Jamie is given the play’s bitterest and most surefire laugh line—“The Mad Scene. Enter Ophelia!”—as if to certify that it has all been a play, but one from which the characters are unable to exit.

Eugene O’Neill’s Long Day’s Journey Into Night is at the Brooklyn Academy of Music through May 27.


Roth in the Review

Bob Peterson/The LIFE Images Collection/Getty Images: Philip Roth at the Yaddo artists’ retreat, New York, 1968

A life in literary criticism: how Review writers read and responded to the novels of Philip Roth (1933–2018).

LeRoi Jones, “Channel X,” July 9, 1964

One of the gaudiest aspects of the American Establishment, as nation, social order, philosophy, etc., and all the possible variations of its strongest moral and social emanations, its emotional core, is its need to abstract human beings. It is a process that leads to dropping bombs.

Mr. Roth, you are no brighter than the rest of America, slicker perhaps. »

Alfred Kazin, “Up Against the Wall, Mama!”, February 27, 1969

Roth is pitiless in reducing Jewish history to the Jewish voice. “Why do you suffer so much?” the Italian “assistant” jeeringly challenges the Jewish grocer in Bernard Malamud’s novel. To which the answer of course comes (with many an amen! from Jesus, Marx, Freud, and others too numerous to mention)—“I suffer for you!” “Why do I suffer so much?” Alex Portnoy has to ask himself in Newark, Rome, Jerusalem (Alex is lonely even in the most crowded bed). His answer, his only answer, the final answer, what an answer, is that to which many a misanthropic son of the covenant is now reduced in this mixed blessing of a country—“My mother! My… Jewish mother!”

This is still funny? In Portnoy’s Complaint it is extremely funny, and the reason that Roth makes it funny is that he believes this, he believes nothing else. »

Murray Kempton, “Nixon Wins!”, January 27, 1972

That Mr. Roth failed should finally interest us less than why he chose to run. But then he is particularly interesting as a novelist just because his good fairy kept his bad fairy from inflicting upon him one of those guardian angels who protect the writer from unseemly adventures and therefore from redeeming risks. Roth has continually striven, from love or hate or a bit of both, to explain America to himself; and that is why he has so steadily managed to give us work that, if it cannot always be judged as satisfactory, has been unexpected and, what is more to the point, exhilarating. »

Frederick C. Crews, “Uplift,” November 16, 1972

What makes an image telling, Roth accurately observed in his interview about The Breast, is not how much meaning we can associate to it, but the freedom it gives the writer to explore his obsessions. But has he explored his obsessions in this book, or simply referred to them obliquely before importing a deus ex machina to whisk them away? In a sense The Breast is a more discouraging work than the straightforwardly vicious Our Gang. Aspiring to make a noble moral statement, Roth quarantines his best insights into the way people are imprisoned by their impulses. What would Alex Portnoy have had to say about that? »

William H. Gass, “The Sporting News,” May 31, 1973*

So The Great American Novel is not about popcorn, peanuts, and crackerjack, or how it feels to sit your ass sore in the hot stands, but how the play is broadcast and reported, how it is radioed, and therefore it is about what gives the game the little substance it has: its rituals, its hymns, chants, litanies, the endless columns of its figures, like army ants, the total quality of its coverage, the breathless, joky, alliterating headlines which announce the doings of its mythologized creatures—those denizens of the diamond—everything, then, that goes into its recreation in the language of America: a manly, righteous, patriotic, and heroic tongue. »

Michael Wood, “Hooked,” June 13, 1974

My Life as a Man is a novel about not being able to write any other novel than the one you turn out to have written. The house of fiction becomes a house of mirrors, and this, presumably, is Roth’s problem as much as Tarnopol’s, since he did write this novel, and not another one. Fair enough: the problem is the theme, the novel enacts the problem. But then such arguments tend to fence with one’s doubts rather than make one entirely happy with the book. There remains a certain triviality there, a sense of the trap too eagerly embraced; an occasional sense of insufficient irony. »

Al Alvarez, “Working in the Dark,” April 11, 1985

What excites Roth’s verbal life—and provokes his readers—is, he seems to suggest, the opportunity fiction provides to be everything he himself is not: raging, whining, destructive, permanently inflamed, unstoppable. Irony, detachment, and wisdom are given unfailingly to other people. Even Diana, Zuckerman’s punchy twenty-year-old mistress who will try anything for a dare, sounds sane and bored and grown-up when Zuckerman is in the grip of his obsession. The truly convincing yet outlandish caricature in Roth’s repertoire is of himself. »

Gabrielle Annan, “Theme and Variations,” May 31, 1990

Roth is an aggressive writer. More aggressive than the Dadaists, or Henry Miller, or the Angry Young Men in Britain in the Fifties, or the Beat generation: he goes for the audience in the spirit of Peter Handke, who called one of his plays Offending the Audience. Roth challenges the reader to walk out, then woos him back again with cleverness and charm, and even an occasional touch of cuteness. Still, Maria walks out, and so does the mistress in Deception. »

Harold Bloom, “Operation Roth,” April 22, 1993

At sixty, and with twenty books published, Roth in Operation Shylock confirms the gifts of comic invention and moral intelligence that he has brought to American prose fiction since 1959. A superb prose stylist, particularly skilled in dialogue, he now has developed the ability to absorb recalcitrant public materials into what earlier seemed personal obsessions. And though his context tends to remain stubbornly Jewish, he has developed fresh ways of opening out universal perspectives from Jewish dilemmas, whether they are American, Israeli, or European. The “Philip Roth” of Operation Shylock is very Jewish, and yet his misadventures could be those of any fictional character who has to battle for his identity against an impostor who has usurped it. That wrestling match, to win back one’s own name, is a marvelous metaphor for Roth’s struggle as a novelist, particularly in his later books, Zuckerman Bound, The Counterlife, and the quasi-tetralogy culminating in Operation Shylock, which form a coherent succession of works difficult to match in recent American writing. »

Frank Kermode, “Howl,” November 16, 1995

Checking through the old Roth paperbacks, one notices how many of them make the same bid for attention: “His most erotic novel since Portnoy’s Complaint,” or “his best since Portnoy’s Complaint,” or “his best and most erotic since Portnoy’s Complaint.” These claims are understandable, as is the assumption that Roth is likely to be at his best when most “erotic,” but that word is not really adequate to the occasion. There’s no shortage of erotic fiction; what distinguishes Roth’s is its outrageousness. In a world where it is increasingly difficult to be “erotically” shocking, considerable feats of imagination are required to produce a charge of outrage adequate to his purposes. It is therefore not easy to understand why people complain and say things like “this time he’s gone over the top” by being too outrageous about women, the Japanese, the British, his friends and acquaintances, and so forth. For if nobody feels outraged the whole strategy has failed. »

Elizabeth Hardwick, “Paradise Lost,” June 12, 1997

The talent of Philip Roth floats freely in this rampaging novel with a plot thick as starlings winging to a tree and then flying off again. It is meant perhaps as a sort of restitution offered in payment of the claim that if the author has not betrayed the Jews he has too often found them to be whacking clowns, or whacking-off clowns. He bleeds like the old progenitor he has named in the title. Since he is, as a contemporary writer, always quick to insert the latest item of the news into his running comments, perhaps we can imagine him as poor Richard Jewell, falsely accused in the bombing in Atlanta because, in police language, he fit the profile; and then at last found to be just himself, a nice fellow good to his mother.

And yet, and yet, the impostor, the devil’s advocate for the Diaspora has, with dazzling invention, composed not an ode for the hardy settlers of Israel, but an ode to the wandering Jew as a beggar and prince in Western culture, speaking and writing in all its languages. »

Robert Stone, “Waiting for Lefty,” November 5, 1998

Who would have thought, forty years ago, it would be Philip Roth, the gentrified bohemian, who would bring remembered lilacs out of that dead land for us, mixing memory and desire? But the fact is that, besides doing all the other marvelous things he does, Roth has managed to turn his bleak part of Jersey and its people into a kind of Jewish Yoknapatawpha County, a singularly vital microcosm with which to address the twists and turns of the American narrative. In his most recent work, he has turned his aging New Jerseyites into some of the most memorable characters in contemporary fiction. »

David Lodge, “Sick with Desire,” July 5, 2001

One might indeed have been forgiven for thinking that Sabbath’s Theater (1995) was the final explosive discharge of the author’s imaginative obsessions, sex and death—specifically, the affirmation of sexual experiment and transgression as an existential defiance of death, all the more authentic for being ultimately doomed to failure. Micky Sabbath, who boasts of having fitted in the rest of his life around fucking while most men do the reverse, was a kind of demonic Portnoy—amoral, shameless, and gross in his polymorphously perverse appetites, inconsolable at the death of the one woman who was capable of satisfying them, and startlingly explicit in chronicling them. Even Martin Amis admitted to being shocked. Surely, one thought, Roth could go no further. Surely this was the apocalyptic, pyrotechnic finale of his career, after which anything else could only be an anticlimax.

How wrong we were. »

J.M. Coetzee, “What Philip Knew,” November 18, 2004

Just how imaginary, however, is the world recorded in Roth’s book? A Lindbergh presidency may be imaginary, but the anti-Semitism of the real Lindbergh was not. And Lindbergh was not alone. He gave voice to a native anti-Semitism with a long prehistory in Catholic and Protestant Christianity, fostered in numbers of European immigrant communities, and drawing strength from the anti-black bigotry with which it was, by the irrational logic of racism, entwined (of all the “historic undesirables” in America, says Roth, the blacks and the Jews could not be more unalike). A volatile and fickle voting public captivated by surface rather than substance—Tocqueville foresaw the danger long ago—might in 1940 as easily have gone for the aviator hero with the simple message as for the incumbent with the proven record. In this sense, the fantasy of a Lindbergh presidency is only a concretization, a realization for poetic ends, of a certain potential in American political life. »

Daniel Mendelsohn, “The Way Out,” June 8, 2006

And indeed, just as his allegedly ordinary hero can’t help being a vividly Rothian type, it’s hard not to see, creeping into Roth’s annihilating pessimism here, an irrepressible sentimentality. What, after all, does it mean to commune with the bones of one’s parents in a cemetery—a communication that involves not only the hero talking to them, but them talking back—if not that we like to believe in transcendence, believe that there is, in fact, something more to our experience than just the concrete, just the bones, just the bits of earth? If the scene is moving, I suspect it’s because of the nakedness with which it exposes a regressive fantasy that seems to belong to the author as much as to his main character: once again, Roth reserves his best writing and profoundest emotion for the character’s relationship with his parents. This reversion to the emotional comforts of childhood seems to me to be connected to the deep nostalgia that characterizes this latest period of Roth’s writing (it’s at the core of The Plot Against America, too); it also seems to be something that Roth himself is aware of, and which, in a moment that is moving in ways he might not have intended, his everyman articulates. “But how much time could a man spend remembering the best of boyhood?” he muses during a sentimental trip to the New Jersey shore town he visited as a boy. It’s a question some readers may be tempted to ask, too. »

Charles Simic, “The Nicest Boy in the World,” October 9, 2008

His powerful new novel, Indignation, seethes with outrage. It begins with a conflict between a father and son in a setting and circumstances long familiar from his other novels going back to Portnoy’s Complaint, but then turns into something unexpected: a deft, gripping, and deeply moving narrative about the short life of a decent, hardworking, and obedient boy who pays with his life for a brief episode of disobedience that leaves him unprotected and alone to face forces beyond his control in a world in which old men play with the lives of the young as if they were toy soldiers. Roth’s novels abound in comic moments, and so does Indignation. His compassion for his characters doesn’t prevent him from noting their foolishness. »

Elaine Blair, “Axler’s Theater,” December 3, 2009

Among all the twinned characters in Roth’s body of work there is no starker contrast than that between Axler and Roth’s other would-be suicide (and performer), Mickey Sabbath of Sabbath’s Theater (1995). Sabbath’s life too has turned to shit, but his howl of grief is driven—for hundreds of pages—by a great vital force that seems inextinguishable. With The Humbling, the scope of the novel has shrunk to accommodate a subject who is stunned nearly silent by his loss. Axler is an ordinary man and cannot turn his own grief into scathing and hilarious soliloquy, and therefore into art. And the art that Axler knows so well offers no consolation. »

*Philip Roth, “Roth’s Novel,” July 19, 1973

In response to:

The Sporting News from the May 31, 1973 issue

To the Editors:

Please advise Professor Gass that I am too old to be grown up.

Philip Roth
New York City



Escape From the Nazis: Anna Seghers’s Suspenseful Classic

The British Museum: Distant view with a mossy branch and a winding road, by Hercules Segers, 1610–1638

My first encounter with Anna Seghers’s novel Das siebte Kreuz (The Seventh Cross) was brief and painful. At some point in the mid-1990s—I must have been in tenth or eleventh grade—our German teacher announced that in the months to come we would be reading excerpts from an antiwar novel written in the days of the Third Reich. The announcement was greeted by the students with incredulity and protest. What? Such a big fat book! On top of that, the antiquated language and a plot that refused to get under way, quite aside from the fact that no one could keep track of all the characters.

I have a vague recollection that the story began with a description of the Rhine landscape I found hard to follow, and that the main character was constantly on the run. There was a feeling of general relief among the students when we were finally able to put the book aside. In all honesty and to my shame, I should add that I don’t have a single pleasant memory of any of the other books I read in school, from Goethe’s Faust to Günter Grass’s The Tin Drum to Paul Auster’s Moon Palace.

For almost a quarter of a century, that was my only acquaintance with Anna Seghers—until I recently looked up something in an entirely different context and got snagged on a still from a movie. It showed Spencer Tracy in a Hollywood film called The Seventh Cross. I was amazed: that unreadable old tome had been made into a movie! And with a star actor? My curiosity aroused, I read The Seventh Cross for a second time, and I devoured it in two days. After that, I understood why it was an international bestseller.

It had been a hit almost immediately after it was published in 1942—simultaneously in German by a publisher in exile in Mexico and in an English translation in the United States. Within six months, it had sold 421,000 copies in the US. To date, it has been translated into more than thirty languages. Then, in 1944, the Austrian-born director Fred Zinnemann, who would make the western classic High Noon a few years later, filmed The Seventh Cross for Metro-Goldwyn-Mayer. Besides Tracy, the cast included Jessica Tandy, Hume Cronyn, and Helene Weigel (in her only film role during her American exile).

After the war, the novel was published to acclaim in Seghers’s native Germany. In 1947, in Darmstadt, Seghers was awarded the most important prize for German-language literature, the Georg Büchner Prize. The same year, she returned to Germany, moved to West Berlin, and joined the Communist party, the newly formed SED, in the zone occupied by the Soviets. She later moved to East Berlin and remained a citizen of the GDR until her death in 1983. In 1961, when Seghers, who had by then become the president of the Writers’ League of the GDR, did not condemn the building of the Berlin Wall, Günter Grass wrote her a letter appealing to her conscience, emphasizing the extraordinary position she held for him as well as his colleagues in the Federal Republic (West Germany): “It was you who taught my generation and anyone who had an ear to listen after that not-to-be-forgotten war to distinguish right from wrong. Your book, The Seventh Cross, shaped me; it sharpened my vision, and allowed me to recognize the Globkes and Schröders under any guise, whether they’re called humanists, Christians, or activists.”

Later, after the Willy Brandt era, when West Germans had reconciled themselves to the existence of the GDR, The Seventh Cross assumed the position in the West that it had long held in the East: it became a book assigned in the schools. Indeed, the novel was rediscovered by the members of the ’68 generation who were protesting their parents’ deep silence about the Third Reich. And the novel continues to be listed in school curricula. It seems to have accomplished the leap into the twenty-first century.


Anna Seghers was born Netty Reiling, in Mainz in 1900, the only child of an upper-class Jewish family. Her father was a dealer in art and antiquities. Seghers always felt close ties to her native city. Decades later, at the age of seventy-five, she wrote in a telegram to the citizens of Mainz, “In the city where I spent my childhood, I received what Goethe called the original impression a person absorbs of a part of reality, whether it is a river, a forest, the stars, or the people.”

Archive, Aufbau Verlag, Berlin: Anna Seghers, Paris, circa 1940

She published her first story in 1924, using the pen name Seghers. She married Laszlo Radvanyi, and the couple had two children, Peter (Pierre) and Ruth. Radvanyi was a Marxist, and Seghers herself became increasingly involved in the German Communist Party (KPD); around the same time, on playwright and novelist Hans Henny Jahnn’s recommendation, she was awarded the prestigious Kleist Prize for literature. A promising future seemed to lie ahead.

Then, in 1933, as in a stage drama, came the moment of peripeteia, a sudden, total reversal. In the year Hitler came to power Seghers, doubly endangered as both a Jew and a Communist, fled with her family to Switzerland. It was the beginning of a long odyssey. She lived in Paris—separated from her husband, who had been interned in a French concentration camp—until France was occupied by the Nazis in 1940. Alone with their two children, she managed first to organize his release and then orchestrate the family’s escape by ship via New York to Mexico City, where she would stay until 1947.

It was in Mexico that she learned of her mother’s fate: murdered in 1942 in the Lublin concentration camp in Poland. The message from the Jewish congregation of Mainz was matter-of-fact: “Mrs. Hedwig Reiling arrived in Piaski near Lublin in the month of March, 1942, and died there.”

Between May of 1938 and late in the summer of 1939, with world war imminent and in precarious circumstances, Anna Seghers wrote “a little novel,” as she called it at first, or as an early working title reads, the “7 Crosses Novel.”

According to her telling, there were originally just four copies of the manuscript, all of which she mailed off in hopes of being published. The first copy was destroyed during an air raid; a friend lost another while fleeing the Nazis; the third fell into the hands of the Gestapo; only the fourth copy, addressed to her German publisher in the United States, arrived at its destination. However, she herself hadn’t kept a copy of her manuscript because the danger of its being found in her apartment by a police raid—a constant fear of hers, even in neutral Mexico—was too great.

The Boston publishing house Little, Brown accepted the novel for publication, but at first, Seghers, at that time the sole support of her family, saw no money from it. The modest author’s advance was withheld in order to pay for the translation. In 1942, the publisher F.C. Weiskopf, by then a friend, wrote her a letter with the happy news that her novel had been selected by the Book of the Month Club: “Be glad, my people, Manna has rained down from heaven.” But it wasn’t until the following year that Seghers started receiving a monthly royalty payment of $500. The breakthrough came with the Hollywood filming. Seghers was paid the fabulous sum of $75,000 in four installments, the last in 1946. This, at least, brought to an end the time of financial distress.


The Seventh Cross is an example of something rare in the literature of the German language: a brilliantly written novel that keeps alive one of the most important chapters of German history—though I can still see why as a student I thought the book was old-fashioned. The grammar is complex, the language at times curious, its female characters oddly passive. So what gives The Seventh Cross its literary quality?

First, something quite simple: Anna Seghers, it turns out, was a veritable master of suspense. It’s obvious why Hollywood grabbed this book—not just for the popular prison-escape motif that makes for breathtaking action, but also because of the cliffhanging delays in the narrative sequence. The central plot, Heisler’s escape, is not told straight through; instead, it is constantly interrupted by jumps in the story to one of the more than thirty other characters in the novel. As for Heisler, he is what used to be called “a real man.” Rough and inscrutable, even ruthless, he is a man who left his wife and baby for another woman; even amid terror and horror, he is a womanizer who is allowed, at the end, a flirtation with a waitress.

Aufbau Verlag, Berlin, 1995: A panel from the graphic version of The Seventh Cross, illustrated by William Sharp, 1942

The book also takes a filmic approach to form. The transition from the last sentence of the prologue, “Where might he be by now?,” to the beginning of the first chapter and the description of Franz and his cheerful bicycle ride suggests a classic cross-fade. At the same time, The Seventh Cross is characterized throughout by very strong visual symbolism—as we see in the use of Christian iconography, beginning with the cross in the title. The motif of the seven crosses is not Seghers’s invention, but rather a particularly perfidious punishment that was actually meted out in 1936 at the Sachsenhausen concentration camp after an escape (fatally unsuccessful), an incident the author had no doubt heard of. Other Christian references include the allusion to the dragon slayer in the main character’s first name; the first night of his flight spent in the Mainz Cathedral; and, not least, the number seven, which turns up not only in the cross of the title, but also in the basic seven-chapter structure of the novel, covering a week, from Monday to Sunday. So, in a certain way, it is also a creation story, at the end of which, although not everything turns out all right, a few things do somehow work out, at least for the protagonist George Heisler.

Such Christian images catch the eye, though their use in the novel is largely unrelated to their original significance. The Mainz Cathedral may have been constructed from the “inexhaustible strength of the people,” but it is depicted, alongside descriptions of the “almost excessively proud” bishops and kings, as a “refuge in which one can freeze to death.” The seventh cross remains empty; George Heisler is no messiah, but rather an ordinary human being with all his weaknesses who won’t let himself be consoled with some abstract future; he is about the here and now. He intends to go to Spain to fight against the fascists.

In contrast to the writing of her advocate Hans Henny Jahnn or of Thomas Mann, Seghers’s sentences are artfully simple. Everything serves to create clarity, and describe action, such as Belloni’s flight across the rooftops. The vivid descriptions of nature lead me to surmise that the young Netty Reiling intentionally chose the name of an artist as her pen name—that of the Dutch painter Hercules Segers (1590–1638), who was known above all for his realistic landscapes and who, to some degree, influenced Rembrandt. Moreover, Seghers’s 1924 doctoral dissertation was titled “Jude und Judentum im Werke Rembrandts” (“The Jew and Jewishness in the Work of Rembrandt”). There, too, she was interested in the depiction of unadulterated reality, for, after all, it was the unassimilated Eastern Jews in all their poverty who served as Rembrandt’s models rather than members of the “brilliant Sephardic congregation,” the official Jewish community. Rembrandt was much more concerned with “rendering real Jewish individuals from his knowledge of their essential nature and their appearance.”

Similarly, Seghers worked “from reality,” and with an effect that was admired by her colleagues. But how was it possible for her, living in exile, to present such an intense and accurate picture of contemporary Germany? For one thing, there was the previously mentioned “original impression” that Seghers absorbed in her childhood of the landscape in the environs of Mainz. For another, she did careful, thorough research. Seghers spoke with fellow refugees and read the voluminous KPD-inspired Braunbuch über Reichstagsbrand und Hitlerterror (Brown Book About the Reichstag Fire and the Hitler Terror) and a report by the Munich Communist Party delegate Hans Beimler dealing with his imprisonment in the Dachau concentration camp, from which he escaped to fight, like George Heisler, in the Spanish Civil War, in which he was killed in 1936.

But another ingredient is added to this feast of detailed description and everyday ordinariness, culminating in the character of Ernst the shepherd, who stands on his hill literally above everything. It is something that points to a metaphysical (not to be confused with religious) dimension behind the novel’s tangibly concrete aspect: the voice of the omniscient narrator, the invisible and omnipresent intellect hovering over and around the characters and seeing to it that what happens does not remain futile and meaningless, even at moments of the greatest brutality. The voice knows about the “eiserner Bestand,” the “emergency reserves” that people find within themselves, or as it says in the remarkable concluding statement of the nameless collective voice: “We all felt how profoundly and how terribly outside forces can reach into a human being, to his innermost self. But we also sensed that in that innermost core there was something that was unassailable and inviolable.”

It may be that, in the aftermath of 1945, when the totality of the horrors of National Socialist rule became known, horrors that no doubt exceeded even Anna Seghers’s powers of imagination at the time, such a passage sounds almost too benign. Yet it is precisely these authorial passages that touched me most deeply on rereading the book. Today, the gruesome acts of the camp commandant would be presented more graphically; the details of Heisler’s flirtation at the end would probably be stretched out; above all, though, a modern author would strike an ironic note, perhaps a cynical one, since now after two world wars we know what a hopeless case our species is. But considering that Anna Seghers, in a moment of extreme existential danger, created, in spite of it all, this literary credo to humanism, this beacon that inspires us, in an ambiguous, crepuscular world full of inhuman barbarity, with the courage never to give up, to retain our humanity no matter what the cost—for that we should and must be grateful.

Adapted from the afterword to Anna Seghers’s The Seventh Cross, translated by Margot Bettauer Dembo, which is published by New York Review Books.


The Mass Murder We Don’t Talk About

Ronald Kabuui/AFP/Getty Images: Rwandan president Paul Kagame receiving the Pearl of Africa Medal from Ugandan president Yoweri Museveni, Kapchorwa District, Uganda, 2012

During the 1990s, unprecedented violence erupted in Central Africa. In Sudan, the civil war intensified; in Rwanda, there was genocide; in Congo millions died in a conflict that simmers to this day; and in Uganda, millions more were caught between a heartless warlord and an even more heartless military counterinsurgency.

This wasn’t supposed to happen. Although the US had for decades backed dictatorships and right-wing rebels across the continent, George H.W. Bush had declared in his 1989 inaugural speech that “a new breeze [was] blowing…. For in man’s heart, if not in fact, the day of the dictator is over. The totalitarian era is passing…. Great nations of the world are moving toward democracy through the door to freedom.”

Bush and his successors supported peace on much of the African continent by funding democracy promotion programs and sanctioning, or threatening to sanction, South Africa and other countries if their leaders didn’t allow multiparty elections and free political prisoners. But in Uganda, Ethiopia, and a small number of other countries, the Bush and Clinton administrations lavished development and military aid on dictators who in turn funneled weapons to insurgents in Sudan, Rwanda, and Congo. In this way, Washington helped stoke the interlinked disasters that have claimed millions of lives since the late 1980s and still roil much of eastern and central Africa today. The complicity of the US in those disasters has not yet been sufficiently exposed, but Judi Rever’s In Praise of Blood explores how Washington helped obscure the full story of the genocide that devastated Rwanda during the 1990s and cover up the crimes of the Rwandan Patriotic Front (RPF), which has ruled the country ever since.

The familiar story about the Rwandan genocide begins in April 1994, when Hutu militias killed hundreds of thousands of Tutsis, mostly with machetes and other simple weapons. The RPF, a Tutsi-dominated rebel army, advanced through the mayhem and finally brought peace to the country in July. The RPF’s leader, Paul Kagame, eventually became president of Rwanda and remains in power today. He has overseen a technocratic economic revival, the installation of one of the best information technology networks in Africa, and a sharp decline in maternal and child mortality. Political dissent is suppressed, many of Kagame’s critics are in jail, and some have even been killed—but his Western admirers tend to overlook this. Bill Clinton has praised Kagame as “one of the greatest leaders of our time,” and Tony Blair’s nonprofit Institute for Global Change continues to advise and support his government.

Over the years, less valiant portraits of Kagame and the RPF have appeared in academic monographs and self-published accounts by Western and Rwandan academics, journalists, and independent researchers, including Filip Reyntjens, André Guichaoua, Edward Herman, Robin Philpot, David Himbara, Gérard Prunier, Barrie Collins, and the BBC’s Jane Corbin. Taken together, they suggest that the RPF actually provoked the war that led to the genocide of the Tutsis and committed mass killings of Hutus before, during, and after it. In Praise of Blood is the most accessible and up-to-date of these studies.

Rever’s account begins in October 1990, when several thousand RPF fighters invaded Rwanda from neighboring Uganda. The RPF was made up of refugees born to Rwandan parents who fled anti-Tutsi pogroms during the early 1960s and were determined to go home. Its leaders, including Kagame, had fought alongside Uganda’s president Yoweri Museveni in the war that brought him to power in 1986. They’d then been appointed to senior Ugandan army positions—Kagame was Museveni’s chief of military intelligence in the late 1980s—which they deserted when they invaded Rwanda.

In August 1990, two months before the RPF invasion, the Hutu-dominated Rwandan government had actually agreed, in principle, to allow the refugees to return. The decision had been taken under enormous international pressure, the details were vague, and the process would likely have dragged on, or not occurred at all. But the RPF invasion preempted a potentially peaceful solution to the refugee conundrum. For three and a half years, the rebels occupied a large swath of northern Rwanda while the Ugandan army supplied them with weapons, in violation of the UN Charter and Organization of African Unity rules. Washington knew what was going on but did nothing to stop it. On the contrary, US foreign aid to Uganda doubled in the years after the invasion, and in 1991, Uganda purchased ten times more US weapons than in the preceding forty years combined.

During the occupation, roughly a million Hutu peasants fled RPF-controlled areas, citing killings, abductions, and other crimes. An Italian missionary working in the area at the time told Rever that the RPF laid landmines around springs that blew up children, and invaded a hospital in a town called Nyarurema and shot nine patients dead. According to Alphonse Furuma, one of the founders of the RPF, the purpose was to clear the area, steal animals, take over farms, and, presumably, scare away anyone who might think of protesting. The Ugandan army, which trained the RPF, had used similar tactics against its own Acholi people during the 1980s and 1990s, so these accounts seem plausible.

At least one American was angry about the RPF invasion. US ambassador to Rwanda Robert Flaten witnessed how it sent shock waves throughout the country, whose majority-Hutu population had long feared a Tutsi attack from Uganda. Flaten urged the Bush administration to impose sanctions on Uganda for supplying the RPF, noting that Saddam Hussein had invaded Kuwait only two months earlier and been met with near-universal condemnation, a UN Security Council demand that he withdraw, and a US military assault.

By contrast, the Bush administration, which was then supplying most of Uganda’s budget through foreign aid, treated the RPF invasion of Rwanda with nonchalance. When it took place, Museveni happened to be visiting the US. He assured State Department officials that he’d known nothing about it, and promised to prevent weapons from crossing the border and court-martial any defectors who attempted to return to Uganda. He then did neither, with the apparent approval of US diplomats. In 1991 and 1992 US officials met RPF leaders inside Uganda and monitored the flow of weapons across the border, but made no effort to stop it, even when the Rwandan government and its French allies complained.

Years later, Bush’s assistant secretary of state for Africa Herman Cohen expressed regret for failing to pressure Museveni to stop supporting the RPF, but by then it was too late. At the time, Cohen maintained that the US feared that sanctions might harm Uganda’s robust economic growth. But he hasn’t explained why Washington allowed the RPF—by invading Rwanda—to ruin that country’s economy, which had previously been similarly robust. Robert Gribbin, a diplomat then stationed at the US embassy in Kampala, has claimed that sanctions weren’t considered because they might have interfered with Uganda’s “nascent democratic initiatives,” without mentioning that Museveni’s security forces were torturing and jailing members of Uganda’s nonviolent opposition and also pursuing a brutal counterinsurgency in northern Uganda that would claim hundreds of thousands of Ugandan lives.

The UN may also have turned a blind eye to Museveni and Kagame’s schemes. In October 1993 a contingent of UN peacekeepers was deployed to help implement a peace agreement between the RPF and the Rwandan government. One of its mandates was to ensure that weapons, personnel, and supplies didn’t cross into Rwanda from Uganda. But when the peacekeepers’ commander, Canadian general Roméo Dallaire, visited the Ugandan border town of Kabale, a Ugandan officer told him that his peacekeepers would have to provide twelve hours’ notice so that escorts could be arranged to accompany them on patrols. Dallaire protested, since the element of surprise is crucial for such monitoring missions. The Ugandans stood their ground, and also refused to allow Dallaire to inspect an arsenal in Mbarara, a Ugandan town about eighty miles from the Rwandan border, which was rumored to be supplying the RPF.

Dallaire has not said whether he brought Uganda’s obstruction to the attention of the Security Council, and he didn’t respond to my interview requests. But in 2004 he told a US congressional hearing that Museveni laughed in his face when they met at a gathering to commemorate the tenth anniversary of the genocide. “I remember that UN mission on the border,” Dallaire said Museveni had told him. “We maneuvered ways to get around it, and of course we did support the movement [i.e., the RPF invasion].”

The likely reasons why Washington and the UN apparently decided to go easy on Uganda and the RPF will be explored in the second part of this article. But for Rwanda’s President Juvénal Habyarimana and his circle of Hutu elites, the invasion seems to have had a silver lining. For years, tensions between Hutus and Tutsis inside Rwanda had been subsiding. Habyarimana had sought reconciliation with Tutsis living in Rwanda—so-called internal Tutsis—by reserving civil service jobs and university places for them in proportion to their share of the population. Though desultory, this program was modestly successful, and the greatest rift in the country was between the relatively small Hutu clique around Habyarimana and the millions of impoverished Hutu peasants whom they exploited as brutally as had the Tutsi overlords of bygone days. While the elites fattened themselves on World Bank “anti-poverty” projects that created lucrative administrative jobs and other perks but did little to alleviate poverty, they continued to subject the Hutu poor to forced labor and other abuses.

Habyarimana, like the leaders of Malawi, Ghana, Zambia, and other countries, was under pressure from the US and other donors to allow opposition parties to operate. Many of these new parties were ethnically mixed, with both Hutu and Tutsi leaders, but they were united in criticizing Habyarimana’s autocratic behavior and nepotism and the vast economic inequalities in the country.

The RPF invasion seems to have provided Habyarimana and his circle with a political opportunity: now they could distract the disaffected Hutu masses from their own abuses by reawakening fears of the “demon Tutsis.” Shortly after the invasion, Hutu elites devised a genocidal propaganda campaign that would bear hideous fruit three and a half years later. Chauvinist Hutu newspapers, magazines, and radio programs reminded readers that Hutus were the original occupants of the Great Lakes region and that Tutsis were Nilotics—supposedly warlike pastoralists from Ethiopia who had conquered and enslaved Hutus in the seventeenth century. The RPF invasion, they claimed, was nothing more than a plot by Museveni, Kagame, and their Tutsi coconspirators to reestablish this evil Nilotic empire. Cartoons of Tutsis killing Hutus began appearing in magazines, along with warnings that all Tutsis were RPF spies bent on dragging the country back to the days when the Tutsi queen supposedly rose from her seat supported by swords driven between the shoulders of Hutu children.

In February 1993 an RPF offensive killed hundreds, perhaps thousands of Hutus in the northern prefectures of Byumba and Ruhengeri, further inflaming anti-Tutsi sentiment. At the time, the Organization of African Unity was overseeing peace negotiations between the RPF and the government, but the process was fraught. Habyarimana knew the RPF was better armed, trained, and disciplined than his own army, so under immense international pressure he agreed in August 1993 to a peace accord that would grant the RPF seats in a transitional government and nearly half of all posts in the army.

Even Tutsis inside Rwanda were against giving the RPF so much power because they knew it would provoke the angry, fearful Hutus to rebel, and they were right. Hutu mayors and other local officials were already stockpiling rifles, and government-linked anti-Tutsi militia groups (including the notorious Interahamwe) were distributing machetes and kerosene to prospective génocidaires. In December 1993, a picture of a machete appeared on the front page of one Hutu-chauvinist publication under the headline “What Weapons Can We Use to Defeat the Inyenzi [Tutsi Cockroaches] Once and For All?” The following month, the CIA predicted that if tensions were not somehow defused, hundreds of thousands of people might die in ethnic violence. This powder keg exploded four months later, when on April 6, 1994, a plane carrying Habyarimana was shot down as it was preparing to land in Kigali, the capital.

The French sociologist André Guichaoua happened to be in Kigali that evening. The country was tense but peaceful. When Hutu military personnel heard about the crash, however, they panicked. That night they began hastily erecting roadblocks around government and army installations, while militiamen, many from the presidential guard, began moving into position. The killing of Tutsis began the following afternoon. According to Guichaoua, Tutsis suspected of collaboration with the RPF, which the killers blamed for the plane crash, were sought out first, but soon the militias were killing every Tutsi they could get their hands on. The vast majority of the victims would turn out to be internal Tutsis, who had nothing to do with the RPF.

Scott Peterson/Liaison/Getty Images: Rwandan Patriotic Front soldiers preparing to march into Kigali, Rwanda, 1994

For decades, blame for the plane crash that set off the genocide has fallen on members of Habyarimana’s army who were believed to be unhappy about the terms of the August 1993 peace accord. However, a growing number of academic studies, judicial reports, and other investigations now suggest RPF responsibility. They are based on eyewitness testimony from multiple RPF defectors who say they were involved in the planning and execution of the plot, as well as evidence concerning the origin of the missiles.

It’s unclear what motive the RPF would have had for shooting down the plane, but it may have wanted to ignite a war in order to abrogate the August accord, which called for elections twenty-two months after implementation. The RPF, dominated by the unpopular minority Tutsis and widely hated for its militancy, including by many internal Tutsis, would certainly have lost.

The RPF began advancing almost as soon as the plane hit the ground, and even before the genocide of the Tutsis had begun. According to Rever, the rebels actually made the situation worse. While Hutus were massacring innocent Tutsis, the RPF was further inciting ethnic hatred by massacring innocent Hutus. In mid-April RPF officers assembled some three thousand Hutu villagers in a stadium in Byumba and slaughtered virtually all of them. In June RPF soldiers attacked a seminary in Gitarama, killing several Hutu priests, and then, according to a four-hundred-page report compiled by a respected priest and human-rights activist named André Sibomana, proceeded to massacre roughly 18,000 others in the prefecture.

RPF defectors told Rever that the purpose of these mass killings was to strike fear in the Hutu population and provoke them to escalate the genocide into such a horrific crime that no political compromise with the former leaders would ever be possible. The August 1993 peace accord would then be irrelevant, and the population would have no choice but to accept an RPF takeover.

Some RPF operatives told Rever that they had even infiltrated Hutu militia groups to stoke ethnic anger and incite ever more indiscriminate reprisals against Tutsis. Again, this seems plausible to me. Kagame and other RPF commanders may have learned such strategies in Uganda while fighting alongside Museveni, whose rebel army reportedly committed similar “false flag” operations in the 1980s. After the genocide, war broke out in neighboring Zaire, as Congo was then known. When assailants killed hundreds of Congolese Tutsi refugees inside Rwanda in December 1997, US officials, Amnesty International, and The New York Times all blamed Hutu insurgents, but RPF sources told Rever that they themselves had done it. “Everyone knew that the RPF staged that attack. It was common knowledge in intelligence circles,” a former RPF officer told Rever. It was a “brilliant and cruel display of military theater,” said another.

Dallaire, the commander of the peacekeepers, remained in Rwanda during the genocide. In his harrowing memoir, Shake Hands with the Devil, he expresses puzzlement about the RPF’s troop movements. Rather than heading south, where most of the killings of Tutsis were taking place, the RPF circled around Kigali. When Dallaire met Kagame at the latter’s headquarters, he asked him why. “He knew full well that every day of fighting on the periphery meant certain death for Tutsis still behind [Rwandan government] lines,” Dallaire writes. Kagame “ignored the implications of my question.” By the time the RPF reached the capital weeks later, most of the Tutsis there were dead.

In May 1994, while supplies continued to flow to the RPF from Uganda, the UN placed the Rwandan government army, some of whose soldiers had participated in massacres of the Tutsis, under an arms embargo. By the end of July, the much stronger RPF had taken control of nearly all of the now ruined country. As it advanced, some two million Hutus fled, either to the giant Kibeho camp in southwestern Rwanda or to camps over the border in Tanzania and Zaire. Some Hutus returned home in the fall of 1994, but according to a UN report prepared by the human rights investigator Robert Gersony, many of them were killed by the RPF, either on suspicion of sympathy with revanchist Hutu militants or simply to terrify others.* These killings stopped during the run-up to a donor meeting in Geneva in January 1995, but then resumed after $530 million in aid was pledged.

Hutus once again fled to Kibeho, where they thought they would be protected by UN peacekeepers. But in April 1995 the RPF fired on the camp and then stormed it while helpless aid workers and UN troops, under orders to obey the RPF, stood by. At least four thousand Hutus, probably more, were killed, including numerous women and children. Thomas Odom, a retired US army colonel stationed at the embassy in Kigali, blamed the killings on Hutu instigators within the refugee population who, he says, stirred up the crowds, provoking panicked RPF soldiers to shoot. Several eyewitnesses dispute this.

In the enormous refugee camps in Zaire, Hutu militants—many of whom had participated in the genocide—began mobilizing to retake the country and launched sporadic attacks inside Rwanda. The RPF’s reaction was fierce, swift, and cruel. Hutu villagers who had nothing to do with the militants were invited to peace-and-reconciliation meetings, then shot point-blank or beaten to death with garden hoes. In 1997, thousands of Hutus fleeing indiscriminate RPF reprisals sought refuge in caves near the Virunga Mountains, where they were trapped and killed by RPF soldiers. Thousands more were killed in the environs of the town of Mahoko around the same time.

In order to neutralize the mounting threat from the Zairean refugee camps, the RPF crossed the border in 1996, invaded them, and herded most of the refugees home. But hundreds of thousands refused to return to Rwanda and fled deeper into Zaire. Some were ex-génocidaires and other Hutu militants, but most were ordinary Hutus understandably terrified of the RPF. Kagame’s commandos, who had by then received training from US Special Forces, tracked them down in towns and villages across the country and killed them. Hundreds of thousands remain unaccounted for.

To hunt down fleeing Hutus, RPF spies deployed satellite equipment provided by the US. The RPF also infiltrated the UN refugee agency and used its vehicles and communications equipment. US officials insisted that all the fleeing refugees were Hutu génocidaires and downplayed the number of genuine refugees identified by their own aerial studies, but in 1997 Rever, then a young reporter for Radio France Internationale, trekked through the forest and found vast encampments of malnourished women and children. She interviewed a woman who had seen her entire family shot dead by Kagame’s soldiers, a boy whose father had drowned while fleeing the RPF, and aid workers who told her they had seen mass graves that were too dangerous to visit because they were being guarded by Kagame’s soldiers.

Versions of Rever’s story have been told by others. While all contain convincing evidence against the RPF, some are marred by a tendency to understate the crimes of the Hutu génocidaires or overstate the RPF’s crimes. But some, including the work of Filip Reyntjens, a Belgian professor of law and politics, have been both measured and soundly researched. Kagame’s regime and its defenders have dismissed them all as propaganda spouted by defeated Hutu génocidaires and genocide deniers. But Rever’s account will prove difficult to challenge. She has been writing about Central Africa for more than twenty years, and her book draws on the reports of UN experts and human rights investigators, leaked documents from the International Criminal Tribunal for Rwanda, and hundreds of interviews with eyewitnesses, including victims, RPF defectors, priests, aid workers, and officials from the UN and Western governments. Her sources are too numerous and their observations too consistent for her findings to be a fabrication.

The official UN definition of genocide is not restricted to attempts to eradicate a particular ethnic group. It includes “killings…with the intent to destroy, in whole, or in part, a national, ethnical, racial or religious group” (my emphasis). The RPF’s operations against the Hutus in the Byumba stadium, in Gitarama, Kibeho, the caves near Virunga, around Mahoko, and in the forests of Zaire do seem to fit that description. The RPF’s aim was, presumably, not to eradicate the Hutus but to frighten them into submission.

And yet in January, the UN officially recognized April 7 as an International Day of Reflection on the 1994 Genocide Against the Tutsis—only the Tutsis. That is how the conflagration in Rwanda is generally viewed. And while the French army has been accused of supplying the Rwandan government with weapons during the genocide, US officials have faced no scrutiny for lavishing aid on Uganda’s Museveni while he armed the RPF in violation of international treaties and the August 1993 peace accord. Why have international observers overlooked the other side of this story for so long? And why are the RPF’s crimes so little known outside of specialist circles? That will be the subject of the second part of this article.

  1. After the genocide, numerous human rights reports described the ongoing killing of Hutus inside Rwanda. Gersony’s report concluded that after the genocide officially ended, the RPF killed over 25,000 civilians, most of them Hutus, inside Rwanda, as well as two Canadian priests, two Spanish priests, a Croatian priest, three Spanish NGO volunteers, and a Belgian school director who attempted to report on RPF atrocities. Gersony submitted his report to UN High Commissioner for Refugees Sadako Ogata, who passed it on to UN Secretary-General Boutros Boutros-Ghali and Kofi Annan, who decided to delay its release. Timothy Wirth, then US undersecretary of state for global affairs, met Gersony in Kigali and said the findings were “compelling.” But at a briefing back in Washington, he downplayed the report, claiming the author had been misled by his informants. Wirth admitted the RPF had killed people, but said it wasn’t “systematic.”

A Gandhian Stand Against the Culture of Cruelty

Nickelsberg/Liaison/Getty Images: Rahul Gandhi, center, with his mother, Sonia, and family at the funeral of his assassinated father, former Prime Minister Rajiv Gandhi, New Delhi, May 24, 1991

The bomb that killed Rajiv Gandhi on May 21, 1991, blew his face off. India’s former prime minister, and scion of the Nehru-Gandhi dynasty, was identified by his sneakers as he lay spread-eagled on the ground. Some Indian newspapers, refusing dignity to the dead and his survivors, published a picture of Gandhi’s half-dismembered body. I remembered the image recently when I read about the reaction of Rajiv’s son, Rahul Gandhi, which he related earlier this year, to a similar image of Velupillai Prabhakaran, the mastermind behind his father’s assassination.

In 2009, Sri Lankan ultra-nationalists had exulted in photographs of the lifeless Prabhakaran, the much-hated terrorist chief of the Tamil Tigers, who pioneered suicide bombings; he was allegedly tortured by the Sri Lankan military before being executed (his twelve-year-old son was certainly murdered in cold blood). But watching Sri Lankans parade Prabhakaran’s mutilated corpse, Gandhi wondered, “Why they are humiliating this man in this way?” He recalled feeling “really bad for him and for his kids and I did that because I understood deeply what it meant to be on the other side of that thing.” 

Such generosity of heart could not have come easily to Gandhi. He grew up playing badminton with the Sikh bodyguards of his grandmother, Indira. These same men would, in 1984, empty their guns into her frail body at her home in Delhi. Gandhi says that he was angry for years over the cruel killings of his father and his grandmother, but he now understands that these events take place in a history, where individuals get caught in the collision of “ideas” and “forces.” He and his sister, Gandhi added, had also “completely forgiven” the people convicted of his father’s assassination. And he did not even mention that his mother, Sonia Gandhi, had successfully appealed to the Indian president to commute the death sentence of one of those convicted to life imprisonment after the condemned woman gave birth to a girl in prison.

Gandhi’s expression of forgiveness was barely noticed amid the roll call of atrocities that constitutes news these days. But it illustrates perfectly the essential condition for compassion that Jean-Jacques Rousseau defined in Emile, or On Education: an awareness that we are as vulnerable as those who suffer. “Why are the rich so hard toward the poor?” Rousseau asked. “It is because they have no fear of becoming poor.” Socio-economic and cultural hierarchies make it harder for the powerful and wealthy to empathize with the weak and poor. Nevertheless, a conscientious student of life ought, he wrote, to “understand well that the fate of these unhappy people can be his, that all their ills are there in the ground beneath his feet, that countless unforeseen and inevitable events can plunge him into them from one moment to the next.”

It is easy to question Gandhi’s understanding of what Rousseau called “the vicissitudes of fortune.” His notion that “in politics, when you mess with the wrong forces, and if you stand for something, you will die” simplifies the historical record. In fact, his grandmother stoked Sikh militancy and trained Prabhakaran’s guerrillas—cynical political choices that contributed to her and her son’s violent deaths. Terrible things can happen to people through no fault of their own, but victims are also agents. Rahul Gandhi himself has chosen to exercise his dynastic prerogative—following his great-grandfather, grandmother, father, and mother—and lead the Congress party, which ruled India for much of its seventy-one years before Congress became stigmatized as a bastion of hereditary privilege and was electorally humiliated by the Hindu nationalist demagogue Narendra Modi.

Furthermore, Gandhi gives no sign of breaking with the Hindu majoritarianism that his own party expediently forged. Discouragingly, victimhood rarely makes for wisdom or humility among South Asian dynasts; like their counterparts elsewhere, such as Saif al-Islam Gaddafi, Mohammed bin Salman, and Ivanka Trump, they unabashedly pursue their claim to unentitled power, wealth, and celebrity. Before her assassination in 2007, Benazir Bhutto, the daughter of Pakistan’s murdered prime minister, had acquired a tawdry fortune in real estate stretching from Surrey, England, to Florida in the US at the expense of the destitute masses she claimed to represent. Since 1991, Sheikh Hasina, the daughter of Bangladesh’s assassinated founding father, has taken turns to plunder her poor country with her fierce rival, Khaleda Zia, the widow of another murdered leader. Forcibly sterilizing millions of poor men in the 1970s, Sanjay Gandhi, Rahul’s uncle, incarnated a ruthlessness that is endemic among pampered political scions. And yet, Gandhi’s willed renunciation of animosity today is significant in a public culture convulsed by hatred and rancor. 

If, as Edmund Burke wrote, the “most important of all revolutions” is “a revolution in sentiments, in manners, and moral opinions,” then it has erupted with vicious force in an India ruled by Hindu supremacists. The country “is sliding toward a collapse of humanity and ethics in political and civic life,” the Indian writer Mitali Saran wrote in The New York Times last month. Her phrasing did not seem melodramatic to those who have seen pictures of demonstrations, led by women, in support of the eight alleged Hindu rapists and murderers of an eight-year-old Muslim girl. Faith in humanity is unlikely to survive contact with the politicians, police officials, and lawyers who ideologically justify the rape of a child; and reason and logic will seem the slave of vile passions when manifested in the whataboutism, driven by fake news, of social media “influencers,” who include a pioneering feminist publisher and an information technology tycoon.

India is undergoing a process of dehumanization—organized disgust for the religious/ethnic/civilizational “alien,” a retreat into grandiose fantasies of omnipotence, followed by intellectual rationalization of murder—not unlike what the world witnessed in Europe in the middle of the last century. More ominously, this moral calamity in the world’s largest democracy is part of a global rout of such basic human emotions as empathy, compassion, and pity. In Israel, another much-garlanded democracy, public opinion emphatically endorses the massacres of young protesters in Gaza. President Trump’s zero-tolerance policy of separating migrant families at the US border (“The children will be taken care of—put into foster care or whatever,” according to his chief of staff) and British Prime Minister Theresa May’s “hostile environment” campaigns against elderly black citizens are merely explicit expressions of a widely sanctioned ruthlessness. W.H. Auden’s words from In Memory of W.B. Yeats, written during Europe’s “low dishonest decade,” resonate more widely today.

In the nightmare of the dark
All the dogs of Europe bark,
And the living nations wait,
Each sequestered in its hate;

Intellectual disgrace
Stares from every human face,
And the seas of pity lie
Locked and frozen in each eye.

Liberal detractors of Trump, Modi, and other elected demagogues set great store by democracy’s impersonal institutions, and their checks and balances. But political and culture wars among groups sequestered in their hate have reached a new peak of ferocity; and faith in the rules, norms, and laws of liberal democracy seems too complacent. In any case, as Alexis de Tocqueville once wrote, “political societies are not what their laws make them, but what sentiments, beliefs, ideas, habits of the heart, and the spirit of the men who form them.” In other words, our political and intellectual gridlock is largely caused by an extensive moral, imaginative and emotional failure—the many frozen seas of pity.

Tocqueville believed that compassion could mitigate the effects of the individualist way of life pioneered in the United States. It could counter the self-centered acquisitiveness and isolation of Homo democraticus (his term), bringing together people that the imperatives of life in a competitive society of supposed equals—envy, vanity, insecurity—tended to divide. In this pragmatic view, compassion was more than just a private virtue—one enjoined by traditional religions and classical philosophies. Indeed, the greatest thinkers of the modern democratic revolution identified compassion as its essential ingredient: close emotional identification with fellow citizens, even in their misfortune, and a reflexive repugnance at the sight of their suffering. Rousseau was convinced that compassion for one’s fellow citizens rather than individual reason or self-interest was the strongest basis for a decent society of equals. Identifying amour-propre as the central pathology of modern commercial society, he knew that its psychic wounds could only be healed by renouncing omnipotence and acknowledging that all human beings are vulnerable. “Thus from our weakness,” he concluded, “our fragile happiness is born.” 

The puzzle of our age is how this essential foundation of civic life went missing from our public conversation, invisibly replaced by the presumed rationality of individual self-interest, market mechanisms, and democratic institutions. It may be hard to remember this today, amid the continuous explosions of anger and vengefulness in public life, but the compassionate imagination was indispensable to the political movements that emerged in the nineteenth century to address the mass suffering caused by radical social and economic shifts. As the experiences of dislocation and exploitation intensified, a variety of socialists, democrats, and reformers upheld fellow-feeling and solidarity, inciting the contempt of, among others, Friedrich Nietzsche, who claimed that the demand for social justice concealed the envy and resentment of the weak against their naturally aristocratic superiors. Our own deeply unequal and bitterly polarized societies, however, have fully validated Rousseau’s fear that people divided by extreme disparities would cease to feel compassion for one another.

Human personality itself has been reorganized by the pressures of intensified competition. Narcissistic traits of self-preservation are heightened in individuals thrown into “a war of all against all, in which even the most intimate encounters become a form of mutual exploitation,” as Christopher Lasch pointed out four decades ago. One result of mainstreaming a bleak survivalist ethic is that “most people, as they grow up now,” the psychoanalyst Adam Phillips and the historian Barbara Taylor wrote in On Kindness, “secretly believe that kindness is a virtue of losers.” It may be that in societies reorganized according to the principles of a marketplace, where men and women find themselves newly defined as individual entrepreneurs, locked into competition with each other, frantically polishing their brands, while a tiny minority monopolizes political, financial, and cultural capital, the seas of pity can only ice over. We have certainly become too accustomed to hearing beneficiaries of the status quo deride compassion, despite its awful scarcity, as a “vice”—to use Jordan Peterson’s pejorative—and loudly execrate “social justice warriors” while presenting as immutable scientific fact the socially constructed hierarchy in which they are on top. Pseudo-Nietzschean dictates to toughen up, discard the language of victimhood, leave the injustices of history behind, and assume individual responsibility emerge from self-declared classical liberals, Enlightenment-mongers, free-speech ideologues, and celebrity-addled rappers alike.

Such a society—individual project-driven and achievement-oriented—already enforces a numbing social isolation; it is aggravated today by the compulsion to constantly produce and transmit, as well as consume, opinion on digital media. The vying for attention and advantage amid storms of scandal and outrage further undermines the possibility of acknowledging our common vulnerability. With this prerequisite for compassion gone, what often prevails is the impulse to denounce and to ostracize, which, however justified, does not make for an understanding of the tangled roots of human suffering. It was hard, for instance, to read Junot Díaz’s account of being raped as an eight-year-old boy and not think of him as a victim, especially at the same time as being confronted with images of the eight-year-old girl who had been gang-raped and murdered in India. It then turned out that in his damaged life Díaz made some terrible choices, and that his confession of victimhood scants the experiences of those he victimized—among the innumerable many for whom sexual humiliation has been a commonplace and unspoken experience. But to abruptly turn him into an object of scorn on the grounds that he is an agent rather than a victim is to assume, wrongly, that human beings can only be one or the other.

Having internalized a proud American notion of agency, Monica Lewinsky held herself fully responsible for her actions in her affair with Bill Clinton, and some prominent feminists unkindly blamed her back in 1998. Today, she recognizes, after a long struggle with questions of agency and victimhood, Clinton’s “inappropriate abuse of authority, station, and privilege,” as well as her own responsibility. Our understanding of these matters is often shaped by prevailing moral prejudices; but it always helps to consider that, as Martha Nussbaum points out in Upheavals of Thought: The Intelligence of Emotions, “Agency and victimhood are not incompatible: indeed, only the capacity for agency makes victimhood tragic.” The vicissitudes of fortune—illness, accident, personal tragedy, political and economic shocks—can overwhelm anyone. They can damage character, yet not completely destroy it. And a true sense of tragedy “asks us,” Nussbaum writes, “to walk a delicate line. We are to acknowledge that life’s miseries strike deep, striking to the heart of human agency itself. And yet we are also to insist that they do not remove humanity, that the capacity for goodness remains when all else has been removed.”

Such a compassionate imagination does not refuse to assess individual culpability; it does not absolve offenders. Rather, it shows them mercy—an attitude that presupposes they have done wrong and must face the consequences, while acknowledging that their capacity for goodness has been diminished by the circumstances of life. It was this merciful vision, derived from a recognition of our common vulnerability, that Rahul Gandhi, after years of grief and righteous rage, expressed as he forgave his father’s killers. He may turn out to be another self-seeking dynast. But there is dignity in his dissent today from a worldwide culture of cruelty; and it is a rare reminder that many frozen seas of pity will have to melt before we regain a semblance of civil society.

The New Passport-Poor

Mondadori Portfolio via Getty Images: Humphrey Bogart, Claude Rains, Paul Henreid, and Ingrid Bergman in Casablanca, 1942

In Casablanca, the ex-lovers Rick Blaine (Humphrey Bogart) and Ilsa Lund (Ingrid Bergman) reunite in the Moroccan port city where Ilsa and her husband, Victor Laszlo, have fled. Most people remember the movie as a story about love during wartime, and at first glance, it is: the pair ultimately renounce their love to help Laszlo, a Czech resistance leader, stick it to the Nazis. But the film’s entire plot—and, indeed, the very condition for Ilsa and Rick’s reunion—hinges on something much more mundane: Ilsa and Laszlo’s pursuit of travel documents. The papers themselves aren’t much to look at—just two folded sheets marked with an official’s signature—but in the film, as in real life, they can make the difference between life and death.

In the first half of the twentieth century, particularly during wars, many travelers in the West needed exit visas granting them the right to leave their country. And during World War II, Morocco, which was still a French protectorate when the movie takes place, became a stop on the refugee trail out of occupied Europe. Migrants traveled “from Paris to Marseille across the Mediterranean to Oran, then by train, or auto, or foot, across the rim of Africa to Casablanca,” the film’s narrator explains. There, they’d bribe an official, buy papers on the black market, or find some other way to procure exit documents, and wait for the next boat or plane to freedom. “The fortunate ones, through money, or influence, or luck, might obtain exit visas and scurry to Lisbon, and from Lisbon to the New World,” adds the narrator in an early scene. “But the others wait in Casablanca… and wait… and wait… and wait.” Rick’s Café is the gin joint where these characters congregate, commiserate, and languish: a veritable United Nations of champagne cocktails and gambling.

Casablanca is more than seventy-five years old. If released today, it would surely be criticized for its moralizing American nationalism, as well as for celebrating French colonial rule without featuring a single Moroccan protagonist. Read as a migration narrative, however, Casablanca reminds us that the identification papers we carry were created not to give us freedom but rather to curtail it. The right to mobility is granted not by the individual but by the state, and access to that right is dictated largely along class lines. The poor, unwanted abroad and unable to pay for the required visas, transit costs, and even basic documentation, stay trapped, while the rich can come and go as they please. In 2016, a record 82,000 millionaires moved to a new country thanks to immigration policies designed to attract the ultrarich, essentially by selling citizenship and residence permits. That year also, populist politicians around the world, from Austria to the Philippines, won over large numbers of voters by promising to keep the riff-raff out.

Passports, in other words, were invented not to let us roam freely, but to keep us in place—and in check. They represent the borders and boundaries countries draw around themselves, and the lines they draw around people, too. This is the case in wartime and in peace. While most countries no longer ask for Casablanca’s famous exit visas, all their elimination has done is remove a cudgel from the bureaucratic gauntlet. As barriers on people’s leaving fall away, blocks on their entering shoot up. And what is the use in leaving if you have nowhere to go?

If the passport served as a symbol of belonging to a sovereign nation, and, for the more fortunate, a way to travel outside it, not long from now the lines will be drawn around our bodies, rather than our countries. As printed papers and analogue technologies are giving way to intricate scans that can identify us by the patterns on our irises, the shape of our faces, and even maps of our veins and arteries, we no longer are our papers; rather, our papers become us.

The paradox of the passport is easy to forget in the West, since papers from North American and European countries grant citizens visa-free access, albeit temporarily, to almost anywhere they’d want to go. It’s not surprising, then, that when it comes to selling cars, credit cards, even mobile phone plans, the term “passport” is used as a stand-in for “freedom.” A German can visit 177 countries visa-free; an American, 173; an Afghan, just twenty-four.

Those of us who enjoy a degree of mobility only consider the converse—that without one, there is no way out—when the stakes are relatively low, if a passport is forgotten, lost, or misplaced. This trope is well-covered in the movies, too: the climax of Sex and the City 2 comes after Carrie Bradshaw leaves her passport at a shoe shop in Abu Dhabi; rushes to the souk with her friends to retrieve it; and, after scandalizing a throng of angry Arab men, is rescued by Emirati housewives dressed, beneath their abayas, in haute couture.

The stakes for Carrie Bradshaw are pathetically trivial; she’ll have to rebook her trip, maybe fly coach, or spend an extra day dressed in conservative clothing. But the rest of the world’s predicament is closer to that of Ilsa Lund and Victor Laszlo—and without even their wealth and connections. Consider the persecuted, stateless Rohingya minority in Myanmar, or the millions of Syrians still living through a brutal civil war. They don’t have documents; or if they do, they don’t have the right kind. They can’t seem to get their hands on the papers they need to safely get where they’re going, so they resort to arduous, dangerous journeys over land and water. And if they don’t obtain a passport, a visa, or a document guaranteeing them safe passage, they face a long, long wait, the possibility of arrest, and, often, of death.


The adoption and standardization of travel documents on an international scale has as much to do with technology as it does with geopolitics. Until there were ways to move quickly over land and sea, it was easier to keep people in with walls, moats, fences, or coercion. But as transportation sped up and countries or empires became more interconnected by trade and by war, controls on the movement of people increased, too. It’s hard to know exactly who the first “passport” holder was, and where his or her document was issued, but according to John Torpey, a professor of sociology and history at CUNY’s Graduate Center and the author of The Invention of the Passport: Surveillance, Citizenship and the State (2000), there’s evidence that early identity controls were internal—that is, within a country, province, or empire. Under feudalism in Europe and Russia, serfs were bound to their masters’ estates; in sixteenth-century Prussia, a police edict was issued to prevent “vagrants” from obtaining “passes” to move to new towns and cities. The ability to move was, as always, tied largely to one’s socio-economic status, though efforts were made to keep the most skilled laborers (and their taxes) at home. An aristocrat with flat feet would have a much easier time traveling than a conscripted pauper.

The institutionalization of passports by the state became significant around the time of the French Revolution. Torpey notes that French revolutionaries objected vehemently to a decree from Louis XVI forbidding his subjects from leaving France without the right documents. After the revolution, they debated whether free men should have to carry passports at all. Some were in favor of the measure, reasoning that it was important for cohesion and security; others insisted that “a revolution that commenced with the destruction of passports must insure a sufficient measure of freedom to travel, even in crisis.”

Those in favor of papers prevailed. Over the next hundred years, empires rose and fell, armies and navies went to war, and conscription forced young men to register to fight, leaving an identification paper trail in their wake. Guards monitored borders and checkpoints judiciously to keep out spies and enemy foreigners during periods of conflict; immigration policies like the 1924 US Immigration Act placed limits on migration based on an applicant’s country of birth. In the wake of World War I, supranational bureaucracies like the League of Nations (later, the United Nations) standardized an international regime of travel documents, visas, and permits. The use of these papers developed in tandem with the rise of the nation-state and the establishment of physical, policed land borders whose existence we take for granted today. In Torpey’s words:

Modern states have frequently denied their citizens the right freely to travel abroad, and the capacity of states to deny untrammeled travel is effected by those states’ control over the distribution of passports and related documents, which have become essential prerequisites for admission into many countries.

As wars drew and redrew national borders and populations were displaced, erased, and exchanged, documents came to define a person’s place in the world. Newly created states—such as Austria, Hungary, Yugoslavia, and Czechoslovakia—began to print their own unique passports; these were a nation-building exercise, a diplomatic necessity, and a citizen’s proof of membership rolled into one. Citizens of the former Yugoslavia still express nostalgia for their old red passports, with which “you could travel anywhere,” in the words of one ex-hitchhiker.

But not everyone fit neatly onto these new maps: trapped in the middle were the stateless, who had no country and no papers, and exiles or refugees fleeing home with the wrong documents. Casablanca features a young Bulgarian woman ready to trade sex for a visa; the novelist Vladimir Nabokov paid a bribe (“administered to the right rat at the right office”) to obtain an exit visa for himself and his wife to come to the US. Stripped of his Russian citizenship, he traveled on a refugee passport issued by the United Nations. He hated it, writing in his memoir Speak, Memory that it was a “very inferior document of a sickly green hue.” Many others were not so lucky.


Just as technology contributed to the physical bordering of the nation-state with fences, walls, and checkpoints, so, too, did it shape the identification documents people carried to show the world where they belonged. Hand-scrawled scraps with brief physical descriptions evolved in the early twentieth century to include photographs, fingerprints, heights, hair, and eye colors. In the UK, entire families used to pose together; hats, props, and sunglasses were even accepted in the images until the 1920s. The US told people to stop smiling for the camera in the 1960s; in the 1970s, color photos replaced black and white ones. Forgeries and favors became somewhat harder to pull off, too. It’s one thing to buy a signed paper from a crooked—or is it benevolent?—official willing to help you out. It’s another to pass yourself off as someone else entirely.

Today, the passport’s days are rumored to be numbered. Airline executives and government officials predict that as soon as 2022, international travel will be “a smooth, tokenless process,” free of IDs or boarding cards, relying entirely on iris scans and fingerprints taken in a split second and vetted by a gigantic database of traveler information. With the rise of these biometric technologies against the backdrop of the war on terror and the resurgence of ethnic nationalism, we’re seeing walls—physical, legal, and rhetorical ones—being thrown up at every step. Physical walls have a symbolic part in the populist imagination, dividing “natives” from “others,” and beefed-up border controls, surveillance, and tracking technology create boundaries just as concrete in effect, which politicians can crow about. Less noticed are the lines being drawn around people, delineations that will potentially follow them around for life.

Giuseppe Cacace/AFP/Getty Images: People walking through a security tunnel demonstrating how travelers departing from Dubai will have their faces or irises scanned, at the Gitex 2017 exhibition at the Dubai World Trade Center, October 2017

The more information our fingerprints or irises immediately link to—such as where we live, what our occupation is, who our parents are, whether we rely on welfare, or if we’ve ever committed a crime—the more grounds there are for a kind of algorithmic segregation. Thanks to durable digital technologies like the blockchain, records will become indelible, for better or for worse; our histories could come back to haunt us decades after the fact of an arrest, a bankruptcy, or a deportation. In Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (2018), the political scientist Virginia Eubanks writes that data-driven welfare administration in the US ended up being a disaster because the technologies it used “are not neutral.” Rather, she argues, “They are shaped by our nation’s fear of economic insecurity and hatred of the poor; they in turn shape the politics and experience of poverty.” The “invasive electronic scrutiny” of the poor will soon be the status quo for all Americans, she notes. Already, an obvious target of biometric tracking will be the subjects of Trump’s promised “extreme vetting”: foreigners, refugees, and immigrants.

When the first of the current administration’s travel bans was announced in January 2017—the one that separated families, marooned long-time residents, and sowed chaos in airport terminals around the world—it was unclear whether the restrictions on travelers from nine Muslim-majority countries would also apply to dual citizens and permanent residents in the US from those countries. This group is a privileged minority, to be sure, and by no means the most immediately afflicted by the ban, but it raised a fundamental question: What determines where any of us is from? Is it the color of our passport, or the color of our skin? Is it where we’re born, or where we’ve mostly lived? In less abstract terms, would an Iranian Swede or a French Somali be forever simply Iranian or Somali in the eyes of the US agencies that control immigration and borders?

There was already some precedent for the ban: in 2015, during the Obama administration, Congress had passed a law that required anyone with links to a country considered a “security risk” (such as Iran, Iraq, Syria, or Sudan), regardless of who they are or where they live, to obtain additional visas to come to the US, rather than simply enter on their other passports. The law still stands. Trump’s more extreme version along similar lines was ultimately scaled back—it doesn’t affect dual citizens, after all, and is facing challenges in the courts—but it did hint that in the future, the borders we’re born into could be impossible to escape. Visa or entry approvals are currently determined by passport stamps, entry records, and cities of birth disclosed on some (but not all) national IDs. With more robust datasets and technologies, there will be less discretion: denials will happen as a matter of course.

This has legal and political consequences, but also personal ones. The collection of biographic, biometric, familial, and even genetic information creates digital legacies that are hard to shake. In China, a country that still requires documents for internal travel, iris scanners, motion sensors, and other sinister-seeming technologies monitor its Muslim Uighur minority constantly. Chinese citizens generally are evaluated for visas, mortgages, schools, and employment by social credit scores. When today’s refugees follow Casablanca’s refugee trail in reverse and travel from Africa, across the Mediterranean, and into Europe, the authorities collect their biometrics and follow the Dublin protocol, whereby a migrant’s first port of entry is where he or she must apply for asylum. It’s getting harder and harder to disappear and start over. So much for mobility, be it physical, economic, or social.

Drawing borders around people might give us a more orderly and predictable world. But for all the promised benefits of a frictionless experience of journeying, it may not be a more humane one. Passports could well disappear in the next decade, but they’ll be replaced by something much more invasive: a digital shadow representing our bodies, our families, and our pasts, following us like little rainclouds everywhere we go.


The Afro-Pessimist Temptation

Amy Sherald/Private Collection/Monique Meloche Gallery, Chicago: Amy Sherald: What’s precious inside of him does not care to be known by the mind in ways that diminish its presence (All American), 2017; from the exhibition ‘Amy Sherald,’ on view at the Contemporary Art Museum St. Louis, May 11–August 19, 2018

Not long ago in the locker room of my Harlem gym, I was the eavesdropping old head who thought Black Panther was another documentary about the militants of the Black Panther Party from the Sixties. I caught on from what the young white guy and the young black guy were talking about that Kendrick Lamar had written some of the film’s soundtrack. I almost said, “Lamar is woke,” but the memory of the first time I heard my father say a thing was “fly” rose up and shut my mouth.

In the current political backlash—the only notion the current administration has is to undo whatever President Obama did, to wipe him out—black America is nevertheless a cultural boomtown. My maternal cousins e-mailed everyone to go to Black Panther that first record-breaking weekend, like they were getting out the vote. Twenty-five years ago black people were the lost population, abandoned in inner cities overrun with drugs, exhorted by politicians and preachers to mend the broken black family. Black intellectuals were on the defensive, and bell hooks talked of the resentment she encountered from white people when she spoke of white supremacy instead of racism. Now white people are the ones who seem lost, who don’t seem to know who they are, except for those white Americans who join the resistance against white supremacy and make apologies to black friends for white privilege because, although they don’t know where else to begin, they do know that they don’t want to be associated anymore with the how-long-has-this-been-going-on.

For eight years, I didn’t care what right-wing white people had to say about anything. Obama’s presence on the international stage decriminalized at home the image of the black man; and the murdered black men around whom black women founded Black Lives Matter were regarded more as the fallen in battle than as victims. The vigils of Black Lives Matter drew strength from memories of the marches of the civil rights movement, just as the protesters of the 1960s were aware of the unfinished business of the Civil War as their moral inheritance. Obama’s presidency made black neoconservatives irrelevant. They fumed that on paper he should have added up to be one of them, but instead Obama paid homage to John Lewis. That was Eric Holder in the Justice Department. But as it turned out, not everyone was vibing with the triumphant celebrations at David Adjaye’s beautiful National Museum of African American History and Culture.

White supremacy isn’t back; it never went away, though we thought it had become marginal or been contained as a political force, and maybe it has, which only adds to the unhelpful feeling that this should not have happened, that the government has been hijacked. I think of the Harvard sociologist Lawrence Bobo in the election’s aftermath telling a meeting of the American Psychoanalytic Association that, had the same number of black people who voted in Milwaukee, Detroit, and Philadelphia in 2012 come to the polls in 2016, Hillary Clinton would have won in the Electoral College. What the 2016 presidential election demonstrated is that, as David Foster Wallace put it, there is no such thing as not voting.

I mind this happening when I am getting too old to run from it. Shit, do not hit that fan. My father’s siblings, in their late eighties and early nineties, assure me that we have survived worse. They grew up on Negro History Week. The Great Depression shaped their childhoods; McCarthyism their college years. My father lived to see Obama’s election in 2008, but not the gutting of the Voting Rights Act in 2013. He would have said that the struggle for freedom is ongoing. Look at how “they” managed to get around Brown v. Board of Education; look at Citizens United, he would say, he who hawked NAACP memberships in airport men’s rooms or read from William Julius Wilson at Christmas dinner. I longed for him to change the subject, to talk to my Jewish friends about science, not racism.

In 1895, the year Frederick Douglass died, Booker T. Washington gave an address in Atlanta cautioning black people to cast down their buckets where they were. The black and white races would be like the fingers of the hand, separate but working together on essential matters. White people took Washington to mean that blacks would accept Jim Crow and not agitate for restoration of the civil rights they had exercised during Reconstruction. They would concentrate instead on self-improvement and economic development. Washington’s conciliatory philosophy made his autobiography, Up from Slavery (1901), a best seller. He was hailed as the most influential black spokesman of his day. Theodore Roosevelt invited him to dine at the White House, much to the consternation of Washington’s white southern supporters.

Washington’s program may have won him admiration among whites, but he never persuaded black people, as far as an angry W.E.B. Du Bois was concerned. In The Souls of Black Folk (1903), Du Bois argued that the influence of three main attitudes could be traced throughout the history of black Americans in response to their condition:

a feeling of revolt and revenge; an attempt to adjust all thought and action to the will of the greater group; or, finally, a determined effort at self-realization and self-development despite environing opinion.

For Du Bois, Washington represented the attitude of submission. He had no trouble with Washington preaching thrift, patience, and industrial training for the masses, but to be silent in the face of injustice was not being a man:

Negroes must insist continually, in season and out of season, that voting is necessary to modern manhood, that color discrimination is barbarism, and that black boys need education as well as white boys.

Du Bois was not alone among black intellectuals in his condemnation of Washington, but it was not true that Washington had no black followers. For Washington, the withdrawal of black people from American political life was to be temporary. Black people would earn white respect by acquiring skills and becoming economically stable. If they couldn’t vote, then they could acquire property. However, Du Bois and his allies maintained that disenfranchisement was a significant obstacle to economic opportunity. Black prosperity was taken by whites as a form of being uppity: white people burned down the black business section of Tulsa, Oklahoma, in 1921, furious at its success. Moreover, black Marxist critics of the 1930s held that Washington’s program to produce craftsmen and laborers uninterested in unions had been made obsolete by the mass manufacturing economy. Washington’s Tuskegee Movement came to stand for backwater gradualism, of which the guesthouse for white visitors to the Tuskegee Institute was a symbol.

The Du Bois–Washington controversy described basic oppositions—North/South, urban/rural—that defined black America at the time. Identifying what Arnold Rampersad has called “an essential dualism in the black American soul,” Du Bois also explored the concept of “double-consciousness”:

One ever feels his two-ness—an American, a Negro; two souls, two thoughts, two unreconciled strivings; two warring ideals in one dark body.

The conflict between national and racial identity has had political expression—integrationist/separatist—as well as psychological meaning: good black/bad black, masked black self/real black self. “Free your mind and your ass will follow,” Funkadelic sang in 1970, by which time the authentic black was always assumed to be militant: there is a Malcolm X in every black person, the saying went.

Ta-Nehisi Coates says that he came to understand as a grown-up the limits of anger, but he is in a fed-up, secessionist mood by the end of We Were Eight Years in Power: An American Tragedy. His collection of eight essays on politics and black history written during Obama’s two terms of office, introduced with some new reflections, portrays his post-election disillusionment as a return to his senses. Coates wonders how he could have missed the signs of Trump’s coming: “His ideology is white supremacy in all of its truculent and sanctimonious power.” He strongly disagrees with those who say that racism is too simple an explanation for Trump’s victory. He was not put in office by “an inscrutable” white working class; he had the support of the white upper classes to which belong the very “pundits” who play down racism as an explanation.

The title We Were Eight Years in Power, Coates tells us, is taken from a speech that a South Carolina congressman made in 1895 when Reconstruction in the state was terminated by a white supremacist takeover. Du Bois noted at the time that what white South Carolina feared more than “bad Negro government” was “good Negro government.” Coates finds a parallel in Trump’s succeeding Obama, whose presidency was “a monument to moderation.” Obama’s victories were not racism’s defeat. He trusted white America and underestimated the opposition’s resolve to destroy him. Coates sees Obama as a caretaker, not a revolutionary, and even that was too much for white America. He writes from the perspective that that “end-of-history moment” when Obama was first elected “proved to be wrong.”

In the 1960s frustration with integration as the primary goal of civil rights began Booker T. Washington’s rehabilitation as an early advocate of black self-sufficiency. But it’s still a surprise to find him among Coates’s influences, to be back there again. It is because Coates at first identified with the conservative argument that blacks couldn’t blame all their problems on racism, that they had to take some responsibility for their social ills. He names Washington the father of a black conservative tradition that found “a permanent and natural home in the emerging ideology of Black Nationalism.” He writes, “The rise of the organic black conservative tradition is also a response to America’s retreat from its second attempt at Reconstruction.” As a young man in 1995, Coates experienced the Million Man March in Washington, D.C., at which the Nation of Islam’s Louis Farrakhan urged black men to be better fathers.

In their emphasis on defense of black communities against racist agents of the state, the Black Panthers in the 1960s considered themselves revolutionary; so, too, did the FBI, which destroyed the movement. Black nationalism wasn’t necessarily revolutionary: some leaders of the Republic of New Afrika endorsed Nixon in 1972 so that the commune might benefit from his Black Capitalism schemes. In the Reagan era, black conservatives complained that a collective black identity was a tyranny that sacrificed their individualism. What they were really attacking was the idea of black people as a voting bloc for the Democratic Party.

Black conservatism joined with white conservatism in opposing the use of government as the enforcement arm of change. Coates eventually gave up on movements that asked blacks to shape up, even though it gave him a politics “separate from the whims of white people.” What turned him off was that, historically, conservative black nationalism assumed that black people were broken and needed to be fixed, that “black culture in its present form is bastardized and pathological.”

Siegfried Woldhek: Ta-Nehisi Coates

At every turn, Coates rejects interpretations of black culture as pathological. I am not broken. William Julius Wilson’s theories that link the deterioration of black material conditions to industrial decline “matched the facts of my life, black pathology matched none of it.” Coates holds the 1965 Moynihan Report on the black family accountable as a sexist document that has shaped policy on the mass incarceration of black men. He is done with what he might call the hypocrisy of white standards. “The essence of American racism is disrespect.” There is no such thing as assimilation. Having a father and adhering to middle-class norms have “never shielded black people from plunder.” American democracy is based on “plunder.”

The subject of reparations has been around in radical black politics for some time. But Coates takes the argument beyond the expected confines of slavery and applies the notion of plunder to whites’ relations with blacks in his history of red-lining and racial segregation as urban policy and real estate practice in postwar Chicago. He also cites the psychological and financial good that West Germany’s reparations meant for Israel: “What I’m talking about is a national reckoning that would lead to spiritual renewal.” Reparations are clearly the only solution for him, but he writes as though they will never be paid; therefore nothing else matters.

Between him and the other world, Du Bois said, was the unasked question of what it felt like to be a problem. But white people are the problem. The exclusion of black people transformed “whiteness itself into a monopoly on American possibilities,” Coates says. It used to be that social change for blacks meant concessions on the part of white people. But Coates is not looking for white allies or white sympathy. “Racism was banditry, pure and simple. And the banditry was not incidental to America, it was essential to it.” He has had it with “the great power of white innocence,” he writes. “Progressives are loath to invoke white supremacy as an explanation for anything.” The repeated use of the phrase “white supremacy” is itself a kind of provocation. “Gentrification is white supremacy.”

There may be white people who don’t believe the “comfortable” narratives about American history, but Coates hasn’t time for them either. The “evidence of structural inequality” may be “compelling,” but “the liberal notions that blacks are still, after a century of struggle, victims of pervasive discrimination is the ultimate buzzkill.” He means that the best-intentioned of whites still perceive being black as a social handicap. He wants to tell his son that black people are in charge of their own destinies, that their fates are not determined by the antagonism of others. “White supremacy is a crime and a lie, but it’s also a machine that generates meaning. This existential gift, as much as anything, is the source of its enormous, centuries-spanning power.” That rather makes it sound like hypnosis, but maybe the basic unit of white supremacy is the lynch mob.

Malcolm X thought Du Bois’s double-consciousness a matter for the black middle class—blacks living between two worlds, seeking the approval of both the white and the black and not getting either. But even when black people could see themselves for themselves, there was still the problem of whether white power could be reformed, overthrown, or escaped. The essential American soul is hard, isolate, stoic, and a killer, D.H. Lawrence said. If white supremacy is still the root of the social order in the US, then so, too, are the temptations of Hate, Despair, and Doubt, as Du Bois put it. “As we move into the mainstream,” Coates says, “black folks are taking a third road—being ourselves.”

It’s as though racism has always been the action and dealing with it the reaction. That is maybe why black thinkers and artists try to turn things around, to transcend race, to get out of white jurisdiction. When black students in the 1970s baited Ralph Ellison for his detachment from protest movements, he said that writing the best novel he could was his contribution to the struggle.

Cornel West blasted Coates for his narrow “defiance,” for choosing a “personal commitment to writing with no connection to collective action.”1 He argued that Coates makes a fetish of white supremacy and loses sight of the tradition of resistance. For West, Coates represents the “neoliberal” wing of the black freedom struggle, much like Obama himself. Obama is little more than a symbol to West (and Coates insists that symbols can mean a great deal). Coates’s position amounts to a misguided pessimism, in West’s view. Robin D.G. Kelley, author of the excellent Thelonious Monk: The Life and Times of an American Original (2009), attempted to mediate between their positions, saying, in part, that West and Coates share a pessimism of outlook and that black movements have always had a dual purpose: survival and ultimate victory.2

As a dustup encouraged by newspaper editors, West’s attack on Coates has been likened to the battle royal: that scene in Invisible Man where black youth are made to fight one another blindfolded in a ring for the amusement of white men. Richard Wright recounts in his autobiography, Black Boy, how he tried to persuade the boy set to oppose him in just such an entertainment to stand with him and refuse to fight. Part of what drove Ellison was his need to one-up Wright, who got to use, in his work before Ellison, metaphors they both shared. But West, however ready he is to say impossible things before breakfast, is the older man, not Coates’s peer, which makes his name-calling—his contempt in the expression “neoliberal”—ineffectual purity.

In pre-Obama times, West warned black youth against the internal and external threats of nihilism. I remember one evening at Howard University in the early 1990s when he and bell hooks rocked the auditorium. I couldn’t hear what they were saying sometimes. But much of Coates’s audience wasn’t of reading age then.

The swagger of 1960s black militancy was absorbed into the rap music of the 1990s. In Democracy Matters: Winning the Fight Against Imperialism (2004), West interprets hip-hop culture as an indictment of the older generation, the lyrics of the young proclaiming that they were neglected by self-medicated adults: “Only their beloved mothers—often overworked, underpaid, and wrestling with a paucity of genuine intimacy—are spared.”

Coates is passionate about the music that helped him find himself and a language. His ambivalence about Obama goes away once he claims him as a member of hip-hop’s foundational generation. In his memoir Losing My Cool (2010), Thomas Chatterton Williams recalls that as a teenager immersed in hip-hop, it nagged at him that he and the other black students at his private school couldn’t say when Du Bois died or when King was born, but they were worked up over the anniversary of the assassination of Biggie Smalls. Coates is different from many other black writers of his generation in that he doesn’t come from a middle-class background. His biography is like a hip-hop story.

He grew up in “segregated West Baltimore,” where his father was chapter head of the Black Panther Party. He said he understood black as a culture, not as a minority, until he entered rooms where no one else looked like him. Early on in We Were Eight Years in Power he speaks of “the rage that lives in all African Americans, a collective feeling of disgrace that borders on self-hatred.” You wonder whom he’s speaking for, even as he goes on to say that music cured his generation’s shame, just as to embrace Malcolm X was to be relieved of “the mythical curse of Ham.” It’s been fifty years since Malcolm X talked about brainwashed Negroes becoming black people bragging about being black. It’s been half a century since those books that told us depression and grief among blacks were hatred turned on the black self.

Coates declares that when Obama first ran for president in 2008, the civil rights generation was

exiting the American stage—not in a haze of nostalgia but in a cloud of gloom, troubled by the persistence of racism, the apparent weaknesses of the generation following in its wake, and the seeming indifference of much of the country to black America’s fate.

Obama rose so quickly because African-Americans were

war-weary. It was not simply the country at large that was tired of the old baby boomer debates. Blacks, too, were sick of talking about affirmative action and school busing. There was a broad sense that integration had failed us.

Peril is generational, Coates says. He has given up on the liberal project, castigating liberal thinking for having “white honor” and the maintenance of “whiteness” at its core. King’s “gauzy all-inclusive” dream has been replaced by the reality of an America of competing groups, with blacks tired of being the weakest of the lot. Harold Cruse in The Crisis of the Negro Intellectual (1967), a vehement work of black nationalism and unique in black intellectual history, said flat out that Washington was right and that Du Bois had ended up on the wrong side, that Marxism was just white people (i.e., Jewish people) telling black people what to think. Cruse was regarded as a crank in his time, but his view of black history in America as a rigged competition is now widely shared, and Cruse was writing before Frantz Fanon’s work on the decolonized mind was available in English.

Afro-pessimism derives in part from Fanon, and maybe it’s another name for something that has been around in black culture for a while. Afro-pessimism found provocative expression in Incognegro: A Memoir of Exile and Apartheid (2008) by Frank B. Wilderson III. A Dartmouth graduate who grew up in the 1960s in the white Minneapolis suburb where Walter Mondale lived, Wilderson is West’s generation. He went to South Africa in the early 1990s and became involved with the revolutionary wing of the ANC that Mandela betrayed. White people are guilty until proven innocent, Wilderson asserts throughout. Fanon is everywhere these days, the way Malcolm X used to be, but Wilderson makes me think of Céline, not Fanon. Coates’s “critique of respectability politics” is in something of the same mood as Wilderson, and, before him, Cruse. He also has that echo of what Fanon called the rejection of neoliberal universalism.

The 1960s and 1970s showed that mass movements could bring about systemic change. Angela Davis said so.3 Unprecedented prosperity made the Great Society possible. But only black people could redefine black people, Stokely Carmichael and Charles V. Hamilton said in Black Power (1967). West has remembered entering Harvard in 1970 and feeling more than prepared by his church and family. The difference between the future of the world as he could imagine it then and how it evidently strikes Coates these days is a profound generational one. “The warlords of history are still kicking our heads in, and no one, not our fathers, not our Gods, is coming to save us.”

Cornel West is right, or I am on his side, another old head who believes that history is human-made. Afro-pessimism and its treatment of withdrawal as transcendence is no less pleasing to white supremacy than Booker T. Washington’s strategic retreat into self-help. Afro-pessimism threatens no one, and white audiences confuse having been chastised with learning. Unfortunately, black people who dismiss the idea of progress as a fantasy are mistaken in thinking they are in the same position as most white people, who perhaps believe still that they will be fine no matter who wins our elections. Afro-pessimism is not found in the black church. One of the most eloquent rebuttals to Afro-pessimism came from the white teenage anti-gun lobbyists who opened up their story in the March for Our Lives demonstrations to include all youth trapped in violent cultures.

My father used to say that integration had little to do with sitting next to white people and everything to do with black people gaining access to better neighborhoods, decent schools, their share. Life for blacks was not what it should be, but he saw that as a reason to keep on, not check out. I had no idea how much better things were than they had been when he was my age, he said. That white people spent money in order to suppress the black vote proved that voting was a radical act. Bobby Kennedy happened to be in Indianapolis the day Dr. King was assassinated fifty years ago. I always thought my father had gone downtown to hear Kennedy speak. No, he told me much later, he’d been in the ghetto tavern of a crony, too disgusted to talk. Yet he wouldn’t let me stay home from school the next day.

A couple of decades later I was resenting my father speaking of my expatriate life as a black literary tradition, because I understood him to be saying that I wasn’t doing anything new and, by the way, there was no such thing as getting away from being black, or what others might pretend that meant. Black life is about the group, and even if we tell ourselves that we don’t care anymore that America glorifies the individual in order to disguise what is really happening, this remains a fundamental paradox in the organization of everyday life for a black person. Your head is not a safe space.

  1. “Ta-Nehisi Coates Is the Neoliberal Face of the Black Freedom Struggle,” The Guardian, December 17, 2017.

  2. “Coates and West in Jackson,” Boston Review, December 22, 2017.

  3. Angela Y. Davis, Freedom Is a Constant Struggle (Haymarket, 2016).


Ratfucked Again

Bill Clark/CQ Roll Call/Getty Images: Anti-gerrymandering activists in costume as Maryland district 5 (left) and district 1 (right) in front of the Supreme Court, March 2018

A decade ago, when the Republican Party was paying the price for the various cataclysms brought on by the George W. Bush presidency—the shockingly inadequate response to Hurricane Katrina, the ill effects of the Iraq War, the great economic meltdown—the Democratic Party reached its post–Great Society zenith. It nominated and elected the country’s first African-American president—and he won decisively, against an admired war hero. It sent sixty senators to Washington, which it hadn’t done in forty years (and back then, around a dozen of those were southern conservatives).1 It also sent 257 representatives to the House, its highest number since before the Gingrich Revolution of 1994. Its governors sat in twenty-eight executive mansions, including in such improbable states as Tennessee, Kansas, Oklahoma, and Wyoming.

Then came the rise of the Tea Party and the calamitous 2010 elections. The Republicans’ net gain of sixty-three seats in the House of Representatives, giving them control over that chamber after a four-year hiatus, swallowed most of the headlines (the party also had a net gain of six Senate seats). The Democrats, as President Obama put it, took a “shellacking.”

But perhaps the more consequential results happened in the states. Democrats lost a net total of four gubernatorial races, taking them down to a minority of twenty-two governorships. They lost gubernatorial contests in some important large states: Pennsylvania and Ohio; Michigan, where Governor Rick Snyder would make his fateful decision about the source of water for the city of Flint; and Wisconsin, where Scott Walker would pass anti-union legislation and steer state government hard to starboard. Florida, governed before that election by Charlie Crist, an independent who had left the GOP and criticized it as extremist, turned to the very conservative Republican Rick Scott. And all of those improbable states listed above eventually reverted to GOP control.

Democrats likewise took a pounding in state legislative races in 2010. Pennsylvania, Michigan, and Ohio had had divided legislatures before that election, and Wisconsin a Democratic one. All four went Republican. So did Maine, New Hampshire, North Carolina, Alabama, and Minnesota. Iowa, Louisiana, Colorado, and Oregon moved from Democratic control to having divided legislatures. In many of these states, the pendulum has never swung back, or it has swung more aggressively in the Republican direction, so that we now have, for example, thirty-three Republican governors and just sixteen Democratic ones, while Republicans maintain complete control of thirty-two state legislatures to the Democrats’ mere thirteen.

It was just one year, 2010, and one election. But it was a pivotal one, because it coincided with the decennial census and the drawing, in time for the 2012 elections, of new legislative districts at the federal and state levels. These newly empowered Republican governors and legislators found themselves with enormous power to reshape politics for a decade, and boy did they use it.

It cannot be said that what they did with their power stood flagrantly outside the tradition of American representative democracy, about which there is much to be ashamed—or at the very least, much of which fails to match the inspiring story we learned as schoolchildren. But it certainly can be said that these new Republican majorities—and a few Democratic ones, too, for example in Maryland—took partisan gerrymandering to new levels. And they did so immediately, so that in the 2012 elections, as the congressional voting analyst David Wasserman of the Cook Political Report found, Democratic candidates for the House of Representatives collectively won 1.37 million more votes than their Republican opponents, or 50.6 percent of the vote—but only 46 percent of the seats.2

As we head into this fall’s elections, the Democrats are expected to make big gains: most observers believe they’ll recapture the House, which they can do with a net gain of around twenty-four seats. That would effectively forestall President Trump’s enacting any sort of legislative agenda. Retaking the Senate—considered a tougher climb, but now thought possible by the experts in a way it was not a few months ago—would mean the Democrats could bottle up presidential nominations and even return the favor of what the Republicans did to Judge Merrick Garland in 2016 by blocking a nomination to the Supreme Court, should one open up.

But as the next census approaches, state executive mansions and legislatures are at least as important, as liberals have belatedly come to realize. The Democrats actually have two election cycles to see how much ground they can regain here, as new district lines won’t be drawn until after the 2020 election results are in. The party that wins the right to draw the legislative maps of the 2020s will have enormous power to shape future Congresses and state legislatures—to determine, for example, whether districts are drawn in such a way that Republicans need only worry about winning conservative votes and Democrats liberal ones, or in a way that might push candidates toward the center; and whether districts comply with the Voting Rights Act, in a decade when much demographic change is expected, enough to perhaps turn the crucial state of Texas at least purple, if not blue. Much is at stake.

The story of what the Republicans accomplished in 2010 is ably told by David Daley, the former editor of Salon, in his book Ratf**ked: The True Story Behind the Secret Plan to Steal America’s Democracy, which Elizabeth Drew reviewed favorably in these pages in 2016.3 In sum, the story starts in the summer of 2009, when Chris Jankowski, who worked for a group called the Republican State Leadership Committee, read a story in The New York Times emphasizing the importance of the 2010 elections. Like all Republican operatives, Jankowski was down in the dumps at the time. But reading that Times article gave him a sense of purpose and mission.

Jankowski grasped the connections immediately. Map-drawing is hugely important; state legislatures control map-drawing; many state legislatures are narrowly divided; many can therefore be “flipped” from one party to another with comparatively small amounts of money, far less than it would cost to flip a congressional seat. Jankowski quickly put together a plan named REDMAP (short for “Redistricting Majority Project”), which would help the Republican Party dominate politics for the decade to come. “Win big in 2010 and Republicans could redraw the maps and lock in electoral and financial advantages for the next ten years,” Daley writes. “Push just 20 [House] districts from competitive to safely Republican, and the GOP could save $100 million or more over the next decade.”

So Jankowski got his seed money and started setting up offices in the state capitals most important to the effort. Wind filled the project’s sails in the form of the crippled economy, which gave anti-Obama voters extra motivation to turn out that fall, and the January 2010 Citizens United Supreme Court decision, which opened the door for many millions of dollars of “dark money” (untraceable back to donors) to finance both individual campaigns and independent committees. REDMAP was off to the races.

Ratf**ked describes the striking results. In Wisconsin, Republicans went into the 2010 election with a 50–45 deficit in the state assembly and an 18–15 disadvantage in the state senate; they emerged with respective majorities of 60–38 and 19–14. In Michigan, Republicans already controlled the state senate. They maintained that control, and they flipped a twenty-three-seat deficit in the lower house to a sixteen-seat advantage. In North Carolina, a Democratic 30–20 advantage dissolved into a 31–19 Republican edge in the state senate; in the state house, the Republicans went from a 68–52 disadvantage to a 67–52 edge (with one independent). And so on, and on.

In every election, corners were cut, court precedents ignored, dirty deeds performed. In Pennsylvania, a thirteen-term Democratic state representative named David Levdansky was defeated because he allegedly voted for a “$600 million Arlen Specter Library.” Such allegations were made in ads paid for by the state Republican Party and the Republican State Leadership Committee. In fact, $600 million was the entire state budget, although even that was the initially appropriated figure; actual outlays, as Levdansky explains to Daley, typically come in lower. As for the amount of that total actually earmarked for the library in honor of the longtime senator, it was around $2 million. But by the time Levdansky got around to explaining all that, most voters had stopped listening.

That same fall in North Carolina, a Democrat named John Snow found himself the target of a mailing about a black felon named Henry Lee McCollum, who was serving time for the rape and murder of an eleven-year-old girl. “Thanks to arrogant State Senator John Snow,” it read, “McCollum could soon be let off of death row.” Snow lost. Four years later, McCollum, who has an IQ in the sixties, and his half brother were cleared of the crime on DNA evidence; Henry Lee had spent more than thirty years on death row.4

The tools of map-drawing began to grow more and more sophisticated in the 1980s, with the advent of computers. In one congressional district in Houston back then, two neighborhoods were united into the same district by inclusion of the Houston ship channel, where of course no actual voters lived. By now, districts can be drawn with such precision—including a specific census tract, excluding the one next door—that party registration of inhabitants can be calculated to the second or third decimal point. The result is districts that are so far removed from the “compact and contiguous” standard that courts have been known to apply that they become the butt of jokes. Pennsylvania’s current seventh congressional district, two blobs linked by a little strip of land that appears to be no more than a few miles wide, reminded one observer of nothing so much as “Donald Duck kicking Goofy.”

Through such techniques, the majority party can figure out ways to cram the voters of the minority party into as few districts as possible. Republicans in particular are assisted in this effort by the fact that Democrats and liberals tend to live in higher-density areas more often than Republicans and conservatives do. Hence, millions of Democrats are packed into comparatively fewer urban districts and suburban districts close to the city center, while Republicans are spread out over more districts. All this in turn means that Republicans can rack up impressive legislative majorities even as they are winning a minority of the vote.

This happened, as Daley documents, in state after state. In Wisconsin in 2012, for example, President Obama won 53 percent of the vote, and Democratic Senate candidate Tammy Baldwin won 51.4 percent. Democrats also won 50.4 percent of the aggregate vote for candidates for the House of Representatives, but Republicans took five of the state’s eight seats. In the state assembly, Democratic candidates overall received 174,000 more votes than GOP candidates, but Republicans won 60 percent of the seats.

A few rays of hope have recently emerged. First, Arizona is one of a handful of states (including California) that have turned over the drawing of legislative lines after the 2020 census to an independent commission. Such commissions will not be entirely free of politics, but they will surely be an improvement on legislators’ drawing districts for themselves and their friends.

Second, the courts have thrown out the egregious lines that Republicans drew in Pennsylvania, a state where Democrats outnumber Republicans, where until 2016 no Republican presidential candidate had won since 1988, where there had been twelve Democrats in the state’s House delegation to seven Republicans, but where after 2010 the congressional split went to 13–5 in the Republicans’ favor. The new map, which was drawn by the Pennsylvania Supreme Court and will be used this November, actually features districts that for the most part make some geographic sense and that most experts think will produce something more like an even split or a narrow Democratic advantage (which would reflect actual voter registration).5

In June, the Supreme Court is expected to rule on two more gerrymandering cases—one coming from Wisconsin, where Republicans drew egregious lines, and another from Maryland, where Democrats were the culprits. At issue is whether a Court majority will define discernible standards for what constitutes partisan gerrymandering. If it does so, a flood of gerrymandering litigation is likely to ensue, which reformers hope will lend momentum to the movement to take the process out of politicians’ hands once and for all.

In the beginning, the edict was simple. The fifty-five delegates to the 1787 Constitutional Convention agreed—under the leadership of a committee led, ironically enough, by Elbridge Gerry, who some years later as governor of Massachusetts would lend his name to the practice under discussion here—that each member of the new House of Representatives would represent around 40,000 people. Later—on the last day of the convention—they lowered the number to 30,000. The Constitution they approved provided that every ten years, a census would be taken, and the size of House districts and number of representatives adjusted accordingly.

A census was duly conducted every decade, and the populations of congressional districts increased by a few thousand each time—37,000 in 1800, 40,000 in 1810, and so on. But the various states’ commitment to drawing fair districts was, shall we say, indifferent. This was a problem that went back to the British Parliament. As boroughs were incorporated, they demanded representation, and they were given it; but no one had yet thought (say, in the 1600s) about the problem of equal representation. As such, both towns with only a few people and fast-growing cities sent two representatives to Parliament. Nothing was done, and by 1783, writes Rosemarie Zagarri in The Politics of Size, a Commons committee reported that a majority of the body’s members was elected by just 11,075 voters—a staggering 1/170th of the population.6

The United Kingdom fixed this “rotten borough” problem with the Reform Act of 1832. In the United States, however, the boroughs just got rottener and rottener over the course of the nineteenth century and well into the twentieth. As immigrants began to arrive, and after the slaves were freed, and then as African-Americans left the southern fields for the northern cities, few states made any effort whatsoever to draw fair congressional districts every ten years. Most continued to conduct a census; they then resolutely ignored the results, openly thumbing their noses at the Constitution. The motivation, of course, was to deny cities—with their populations of immigrants and, later, black people—their rightful representation.

Here are some numbers, from J. Douglas Smith’s eye-opening 2014 book On Democracy’s Doorstep.7 The inequities nearly defy belief. In Illinois after World War II, the populations of congressional districts ranged from 112,000 to 914,000. The larger district was urban, the smaller one rural, and the larger number meant of course that urban areas had fewer representatives, and that residents of the larger district had about one eighth the voice in Congress that residents of the smaller district had. In midcentury California, the 6,038,771 residents of Los Angeles County had one state senator, the same as the 14,294 inhabitants of three rural Sierra counties. As you might guess, the numbers in the South were appalling, disenfranchising what black voters did exist. But of all the states, the worst was Michigan, where rural voters and the legislative barons of Lansing lived in mortal fear of Detroiters having their rightful political say in the state’s affairs.

Hulton Archive/Getty Images. Elbridge Gerry, circa 1800: as governor of Massachusetts he became known for manipulating voting districts, a process now called “gerrymandering”

So it went, for 170 long years. How did such states get away with this? The courts would not enforce fair districts. Aggrieved citizens filed lawsuits, and courts looked at the numbers and said “you’re right”; but they would go on to aver that this was a political matter best settled through politics. The story Smith tells is the harrowing process by which these wrongs were finally put right in the early 1960s in two landmark Supreme Court decisions, Baker v. Carr and Reynolds v. Sims. In Baker (1962), which originated in Tennessee, the Court held that apportionment was a “justiciable” issue, i.e., one on which court intervention was appropriate. Two years later in Reynolds, which originated in Alabama, the Court held by 8–1 that all legislatures (except the United States Senate) had to meet the “one person, one vote” standard of representation, so that districts all had more or less equal numbers of voters.

It’s a riveting tale, involving Archibald Cox, later of Watergate fame but in the early 1960s President Kennedy’s solicitor general, urging caution, and Robert F. Kennedy pushing for more aggressive arguments before the Court. Earl Warren was asked numerous times to name the toughest case decided while he presided as chief justice. The man who oversaw decisions like Brown v. Board of Education and Miranda v. Arizona always answered “apportionment.”

Two titans squared off as Justice William O. Douglas emerged as the biggest champion on the Court of taking on the apportionment issue, and Felix Frankfurter its chief opponent (during Baker deliberations; by the time of Reynolds, Frankfurter was gone). Another justice, Charles Evans Whittaker, was so tormented by the Baker deliberations that he had a nervous breakdown and left the Court. A movement started immediately to call a constitutional convention to undo this judicial treachery and return to the states the right to treat legislative representation as they pleased, egged on by Senate Republican leader Everett Dirksen. Thirty-three states said yes—leaving the effort one state short of success.

This is the larger historical background against which recent Republican efforts need to be understood. The history of legislative system-rigging by rural, conservative interests is a long and ignoble one. For most of our history, our democracy has been, in Smith’s memorable phrase, a “deliberately misshapen enterprise.”

Democrats have now, for the first time in modern history, set up the machinery to try to do in 2020 what the GOP accomplished in 2010. The National Democratic Redistricting Committee was established last year and is headed by Eric Holder, the former attorney general. “We have to come up with a system that is more neutral, because the reality now is that we have politicians picking their voters as opposed to citizens choosing who their representatives are going to be,” Holder said at a Harvard Kennedy School forum on April 30.

His group raised nearly $11 million in its first six months and has placed a dozen states on its “target” list and another seven on its “watch” list. In most states, the group covets the governor’s mansion, for obvious reasons, and hopes to flip at least one house of the state legislature. In Minnesota, Wisconsin, North Carolina, and Ohio, it’s also eyeing down-ballot races. It appears to be most focused on Ohio, where the offices of secretary of state (which oversees elections) and state auditor are on its list.

If successful, the Holder group’s efforts—he is also considering running for president, by the way—will bear late fruit. In the meantime, to capture the two dozen seats they need to control the House, Democratic candidates this fall will need to win considerably more than 51 percent of the total vote. Wasserman of the Cook Political Report estimates that Democrats need to beat Republicans by 7 percent overall, which is in the vicinity of the party’s lead in most polls asking respondents whether they prefer that Democrats or Republicans win this fall. But the Brennan Center for Justice issued a report in late March saying that the number needed to win is more like 11 percent. The report assumes different overall Democratic vote margins and from there projects potential Democratic seat gains in the House based on historical totals and on Brennan’s own estimates taking gerrymandering into account.8

Notice that even according to the more optimistic (at least in the lower range) historical expectation numbers, the Democrats would need to win the national vote by a margin of 6 percent to gain enough seats to retake the House. Doing that is a tall order. Even in 2012, their best year in recent times, they won by only about 2 percent overall.

Every other sign for the Democrats has been encouraging. Enthusiasm has been far greater among Democratic voters than among Republican ones. Even when Democratic candidates have lost, they’ve lost encouragingly. In a late April special congressional election in Arizona, the Democratic candidate came within five points of the Republican in a district that both Donald Trump in 2016 and Mitt Romney in 2012 carried by more than twenty points. After the results came in, Wasserman tweeted: “If the only data point you had to go on was last night’s #AZ08 result, you’d think a 30–40 seat Dem House gain in Nov. would be way low.”

So Democrats have many reasons to retain their optimism. But if they fall short, the reason may have less to do with Donald Trump than with Chris Jankowski and his work in 2010, which stands far less athwart American political history and tradition than we’d prefer to believe.

—May 10, 2018

  1. Technically, fifty-eight; but two independents, Bernie Sanders of Vermont and Angus King of Maine, caucused with the Democrats, giving them the crucial sixty votes needed to break a filibuster.

  2. The Cook paper itself is behind a paywall, but the numbers can be found at W. Gardner Selby, “Republicans Won More House Seats Than More Popular Democrats, Though Not Entirely Because of How Districts Were Drawn,” Politifact.org, November 26, 2013.

  3. “American Democracy Betrayed,” The New York Review, August 18, 2016.

  4. See Mandy Locke and Joseph Neff, “Pardoned Brothers’ Payout Triggers Fight Over Who Gets a Cut,” The Charlotte Observer, May 1, 2017.

  5. See Nate Cohn, Matthew Block, and Kevin Quealy, “The New Pennsylvania Congressional Map, District by District,” The New York Times, February 19, 2018.

  6. The Politics of Size: Representation in the United States, 1776–1850 (Cornell University Press, 1987), p. 37.

  7. On Democracy’s Doorstep: The Inside Story of How the Supreme Court Brought “One Person, One Vote” to the United States (Hill and Wang, 2014).

  8. Laura Royden, Michael Li, and Yurij Rudensky, “Extreme Gerrymandering and the 2018 Midterm,” Brennan Center for Justice, March 23, 2018.


Remodeling Mayhem

The Image-Complex/Rafah: Black Friday/Forensic Architecture, 2015. Photographs and videos are placed within a 3D model to tell the story of one of the heaviest days of bombardment in the 2014 Israel-Gaza conflict

About five miles north of the Israeli city of Beersheba, on the edge of the Negev desert, there’s a small village named Al-Araqib that has been demolished more than a hundred times in the last eighteen years. These demolitions have ranged from the total razing by over a thousand armed policemen with trucks and bulldozers to a simple flattening of a tent by a tractor.

In its heyday, the village had about 400 inhabitants, though now only a dozen or so remain, living within the limits of the village cemetery, next to the many graves. The cemetery affords the Bedouin villagers their main claim to the land: if they can prove that they have cultivated it since before 1948, and thus that the village has existed at least that long, the Israeli government will let them stay.

The plight of this village—shared by forty-six others that have been collectively dubbed “the battle of the Negev” by the Israeli media and establishment—is one of the many cases of state violence examined in “Counter Investigations,” an exhibition at the Institute of Contemporary Arts in London detailing the work of an investigative agency called Forensic Architecture. The group, founded by the Israeli architect Eyal Weizman in 2010 and based at Goldsmiths College in South London, seeks to use forensic methods of evidence-gathering and presentation against the nation states that developed them. Weizman believes that architects, who are skilled at computer modeling, presenting complex technical information to lay audiences, and coordinating projects made up of many different experts and specialists, are uniquely suited to this kind of investigation. But there’s another, simpler explanation for their involvement: “Most people dying in contemporary conflicts die in buildings.”

From missiles designed to pierce a hole in a roof before exploding inside a particular room, to army units blasting through the walls of houses, to the repeated demolitions of villages like Al-Araqib, conflict has increasingly acquired an architectural dimension. This development has prompted a wave of work by writers, artists, and academics such as Sharon Rotbard, Derek Gregory, Trevor Paglen, and Hito Steyerl focusing on the intersection between design, warfare, and the city. Steyerl, for example, writes in her recent book Duty Free Art that killing is a “matter of design” expressed through planning and policy; like Weizman she asks us not just to examine the moment a gun is fired, but also the ever-widening scope of circumstances and legal mechanisms that make such violence possible, even inevitable.

Forensic Architecture, 2016. A composite image merging 3D modeling with news footage of a home destroyed in a drone strike on Miranshah, North Waziristan, Pakistan

Weizman studied at the Architectural Association in London, and published his first book in 2000, Yellow Rhythms: A Roundabout for London, an eccentric proposal for a vast roundabout straddling the Thames in southwest London, just north of Vauxhall. This was less a sincere attempt to solve traffic congestion and more an inventive thought experiment designed to reveal the absurdities and inequities of the London real estate market. Weizman imagined a state-owned, speculative development in the center of the roundabout—a set of empty skyscrapers accumulating value that would be skimmed off and used to fund progressive policies elsewhere. With this project Weizman embraced an attitude that had flourished at the AA and a number of other innovative architecture schools since the Sixties: that architecture is a way of thinking about the world, of synthesizing and presenting knowledge, rather than just a way of designing and constructing buildings.

Within a few years, Weizman was able to put this approach to the test. Weizman, along with his colleague Rafi Segal, was selected to represent Israel at the World Congress of Architecture in Berlin in 2002. As part of the exhibit, the pair put together a catalogue of fourteen essays detailing the different ways in which the business and practice of architecture was part of Israeli strategy in the West Bank. The result, entitled A Civilian Occupation, was banned by the Israeli Association of United Architects, which ordered the pulping of the 5,000 printed copies (Segal and Weizman managed to save around 850, and the book was later re-released by Verso Books and the Israeli publisher Babel). Weizman built upon these ideas in his 2007 book Hollow Land, a sweeping investigation of Israeli policy from an architectural point of view that combines polemical force with minute, often surreal detail gleaned from interviews with Israeli military officers: “Derrida may be a little too opaque for our crowd,” says one, unexpectedly. “We share more with architects; we combine theory and practice. We can read, but we know as well how to build and destroy, and sometimes kill.”

Through this work Eyal Weizman, and later Forensic Architecture, has been involved in numerous court cases in Israel—most successfully as part of an action in the Israeli High Court designed to halt the construction of a section of separation wall in the village of Battir on environmental grounds. The agency provided models and animations demonstrating the environmental damage that would be caused by various army engineer proposals for a more “architecturally sustainable” and less “invasive” form of wall—leading to the idea of a wall in that area being abandoned altogether. Wider work by the agency has been used in trials from Guatemala to the International Criminal Court at the Hague, with similar aims—to use visualization and modeling to bring a complicated set of relationships vividly to life in the courtroom. This can have varying results, and their work is ignored or dismissed as often as it produces dramatic turnarounds. Battir might have been saved, but the future of the village of Al-Araqib, for example, is still precarious despite exhaustive historical research and a number of court sessions.

Although showcasing some of the agency’s work in Israel, where Weizman and his colleagues continue to collaborate with civil society groups and human rights organizations such as ActiveStills and B’Tselem, the main purpose of the London exhibition is to show how Forensic Architecture adapts its practice to provide a commentary on its immediate location. The show is squarely aimed at a British audience during a period of mounting hostility toward refugees and migrants. Immediately after the referendum to leave the European Union, for example, there was a documented spike in hate crimes in Britain, and they have yet to fall below pre-referendum levels. The exhibition has coincided with a long overdue public discussion about a policy once proudly described by Prime Minister Theresa May as the “hostile environment,” in which both non-citizens and British citizens of color have faced the same labyrinthine bureaucracy and a pervasive attitude of skepticism when trying to prove their right to be in the country. Preliminary figures suggest that thousands of citizens have suffered at the hands of this racist system in recent years, leading to destitution, lack of healthcare provision, and sometimes even deportation.

Forensic Architecture and Anderson Acoustics, 2017. Simulated propagation of sound within a digital model of the internet café where Halit Yozgat, the son of Turkish immigrants, was murdered in Kassel, Germany, 2006

The show’s curators have subtly assembled parts of the exhibition to address this political background, in which issues surrounding race and immigration regularly dominate British media. The first project in the show uses videos, a vast timeline, and outlines traced on the gallery floor to reconstruct the murder of a Turkish man by a neo-Nazi in Germany, while another uses survivors’ testimonies to construct a harrowing model of a secret torture prison in Syria called Saydnaya. The latter project, perhaps their most famous to date, was carried out by Lawrence Abu Hamdan, a member of the group specializing in sound analysis. Through an immersive video shown at the exhibition, we see Hamdan working with survivors to reconstruct claustrophobic visual models of the interior of the prison, according to the sounds they heard while held there, or transported to and from torture cells, in near total darkness. Hamdan’s approach preserves gaps and errors in the memories of the survivors, and makes visual the trauma they experienced—corridors stretch to impossible lengths, while sinister locked doors multiply and spread around the viewer. These eerie plans, resembling a nightmarish memory palace, testify not just to the physical existence of the hidden prison, but also to the psychological consequences of what happened there, persistent in the minds of the survivors.

Beyond the film about Saydnaya, as the exhibition moves deeper into a cavernous darkened room, three video projections break down into traumatic detail the way that a lack of coordinated sea rescue services exacts a terrible death toll among migrants attempting to cross the Mediterranean: while different European authorities, NGO ships, and Libyan coast guard vessels jostle chaotically, refugees drown, sometimes only feet away from the boats that are supposed to rescue them. (One such death is shown on film, captured by a camera fixed to the side of an NGO boat, and cannot be forgotten once seen.) According to Forensic Architecture’s research coordinator Christina Varvia, the purpose of these displays—from Germany, to Syria, to the Mediterranean Sea—is to viscerally impress upon a British audience the brutal reality of the refugee experience—whether it’s the violence they’re fleeing from, the dangerous, often deadly journey they face if they try to reach Europe (most don’t), or the racism they often encounter once they get there.

It is a testament to the group’s media fluency and inventive presentations that the sheer quantity of depressing, disturbing information on display in this exhibition rarely feels wearing or boring. For all its focus on precisely reconstructed factuality, many acutely emotional moments linger in the mind long after seeing the works. Whether or not this is “art” is a debate that Forensic Architecture long since left behind—the group is adept at presenting its findings in galleries and courts alike. (The art establishment is clearly not troubled by the question either: Forensic Architecture has been nominated for the prestigious Turner Prize.)

Why, when nation states have always committed crimes and lied about them, has an organization such as Forensic Architecture appeared only in the last decade? In Duty Free Art Hito Steyerl argues with apocalyptic brio that the world is entering a period of “post-democracy,” in which “states and other actors impose their agendas through emergency powers,” while democratic mandates weaken and “oligarchies of all kinds are on the rise.” Perhaps the group owes its existence to this new mood, in which authorities can no longer be trusted to handle the evidence, and impartiality seems increasingly impossible or, in Weizman’s opinion, even undesirable. “Having an axe to grind,” he writes, “should sharpen the quality of one’s research rather than blunt one’s claims.”

Ariel Caine/Forensic Architecture, 2016. A diagram of a well belonging to Awimer Salman Abu Medigam, Al-Araqib, north of Beersheba, Negev desert, September 2016; blue rectangles indicate the positions of the individual image frames from which the 3D information was derived

“Counter Investigations: Forensic Architecture” was at London’s Institute of Contemporary Arts through May 13. Forensic Architecture: Violence at the Threshold of Detectability, by Eyal Weizman, is distributed by MIT Press. Duty Free Art: Art in the Age of Planetary Civil War, by Hito Steyerl, is published by Verso.


Devastatingly Human

Paula Rego/Marlborough International Fine Art. Paula Rego: The Family, 1988

The gripping and dramatic show “All Too Human: Bacon, Freud and a Century of Painting Life” merits its title: it is “all too human” in the tender, painful works that form its core. But “a century of painting life” promises something wider—does it smack of marketing, a lure to bring people in? In fact, the heart of the show is narrower and more interesting, illustrating the competing and overlapping streams of painterly obsession in London in the second half of the twentieth century. It shows us how, in their different ways, painters such as Francis Bacon and Lucian Freud, Leon Kossoff and Frank Auerbach, R.B. Kitaj, and Paula Rego redefined realism. In defiance of the dominant abstract trend, they teased and stretched the practice and impact of representational art. “What I want to do,” Francis Bacon said in 1966, “is to distort the thing far beyond the appearance, but in the distortion to bring it back to a recording of the appearance.” In this show, terms like “realism” and “human” take on new meaning and power.

Private collection, Switzerland, c/o Di Donna Galleries. Chaïm Soutine: The Butcher Stall, circa 1919

The exhibition begins, cleverly, with some forebears of these London artists, pre-war painters who looked with intensity at the lives, settings, and landscapes that most affected them, and used paint in a highly personal way to convey not only what they saw, but what they felt. It feels odd, at first, to walk into a show that claims to be about “life” and find a landscape rather than a life-study, yet the urgent, textured use of paint in Chaïm Soutine’s earthy landscapes, as well as his distorted figures and the raw strength of The Butcher Stall (circa 1919), with its hanging carcasses, had a profound impact on Francis Bacon. In a similar way, all the works in this first room reach toward the future: Stanley Spencer’s portraits of his second wife, Patricia Preece, clothed and naked, stare out with the unpitying confidence of Lucian Freud’s early portraits. Walter Sickert’s dark portrayals of London prostitutes—his attempt to give “the sensation of a page torn from the book of life”—anticipate the unsentimental nudes of Freud and Euan Uglow (no relation to me). David Bomberg’s layered, arid Spanish landscapes point toward the scumbled, perspectiveless scenes of Kossoff and Auerbach.

Tate. Francis Bacon: Dog, 1952

Nothing, however, prepares one for the tender ferocity of Bacon’s isolated, entrapped figures. In the earliest of these, the large canvas of Figure in a Landscape (1945), a curled-up, almost human form appears to be submerged in a desert—we see his arm and part of his body, but the legs of his suit hang, empty, over a bench. This is masculinity destroyed. The sense of desperation is even stronger in Bacon’s paintings of animals, such as Dog (1952), in which the dog whirls like a dervish, absorbed in chasing its tail, while cars speed by on a palm-bordered freeway, or Study of a Baboon (1953), where the monkey flies and howls against the mesh of a fence. In their struggles, these animals are the fellows of Bacon’s “screaming popes”: in Study after Velazquez (1950), a businessman in a dark suit, jaws wrenched open in a silent yell, is trapped behind red bars that fall like a curtain of blood. The curators connect Bacon’s postwar angst with Giacometti’s elongated statues, isolated in space, and to the philosophy of existentialism. Yet Bacon’s vehement brushstrokes speak of energy and involvement, physical, not cerebral responses. In Study for Portrait II (after the Life Mask of William Blake) (1955), you feel the urgent vision behind the lidded eyes. He cares, passionately.

Celia Paul: Painter and Model, 2012 (Celia Paul/Victoria Miro, London and Venice)

This was a postcolonial as well as a postwar world, a point made abruptly by devoting a room to the work of F.N. Souza, who came to London from Bombay in 1949 and worked there until he left for New York in 1967. Despite Souza’s popularity at the time, and the range of sacred and profane references that link him uneasily to Bacon, his stark religious iconography feels out of keeping with the bodily compulsion of Bacon’s work and the new streams of influence shaping what R.B. Kitaj named “the School of London.”

One of these streams flowed from the Slade, where William Coldstream was professor of fine art and the young Lucian Freud was a visiting tutor. Here, in a very different way from Bacon, you feel the pressure of flesh. Coldstream believed that artists should work without preconceptions, through minute, painstaking observation, fixing “reality” with measurement, allowing the subject to emerge slowly on the canvas. His Seated Nude (1952–1953) was painted over at least thirty sittings of about two hours each—no wonder the model looks glazed. His pupil Euan Uglow adopted this technique, setting his figures against a geometric grid. It gives them an eerie physicality. (I’m not the only person to stand in front of his 1953–1954 Woman with White Skirt and say, “Paula Rego.”) Uglow is famous for telling a model, “Nobody has ever looked at you as intensely as I have.” Over time, his control of detail and setting became obsessive, but his piercing gaze and careful technique remained, rendering his subjects at once solid and dreamlike, their inner spirit elusive but embodied.

Lucian Freud: Girl with a White Dog, 1950–1951 (Tate)

In the 1950s, Uglow’s belief in the value of minute observation was shared by Freud, who admitted, as Emma Chambers writes in the exhibition catalogue, to a “visual aggression” toward his sitters: “I would sit very close and stare. It could be uncomfortable for both of us.” His paintings from this period, delicately wrought with a fine sable brush, are almost hallucinatory in their detail, with a Pre-Raphaelite veracity of sheen and texture. We see the softness of material, the fur of the dog. And how exposed and alarmed his first wife, Kitty Garman, looks in the extraordinary Girl with a White Dog (1950–1951), in her pale green dressing gown with one white, veined breast revealed.

Frank Auerbach: Head of E.O.W. I, 1960 (Frank Auerbach/Marlborough Fine Art/Tate)

While Coldstream was instilling in his students the virtues of precision and measurement, David Bomberg was inspiring his pupils at the Borough Polytechnic in South London from 1946 to 1953 with a far freer, more tactile approach. To Bomberg, painting was about the “feeling” and experience of form, not its mere appearance. His own work conveyed the sense of mass in fluid, sensuous oils, and young artists such as Frank Auerbach, Dennis Creffield, Leon Kossoff, and Dorothy Mead flocked to his classes. Often working outdoors, as Bomberg did, Auerbach and Kossoff painted the settings they knew, showing a new London rising from the old, driving across the canvas in slabs of paint and thick encrustations. Auerbach’s Rebuilding the Empire Cinema, Leicester Square (1962) and Kossoff’s Building Site, Victoria Street (1961) are so tactile that they make you want to trace the lines with your hand, while the sticky ridges of Auerbach’s Head of E.O.W. I (1959–1960)—so strong from a distance, so baffling up close—seem as much sculpture as painting.

Again and again in this exhibition, we move from the exchange of ideas and influences to the individual vision. Some works, indeed, are so drenched in emotion that they produce ripples of shock. The intimacy of Freud’s work is intensified when he moves, around 1960, from minute, close-up fidelity to large, expressive brushstrokes. In his later paintings, he catches the twist of muscles, the sweat on the skin, the pride and fullness of bodies in sleep, as in the great Leigh Bowery (1991), showing Bowery, a performance artist of billowy corpulence, his head slumped gently on his shoulder, or in Sleeping by the Lion Carpet (1996), where Sue Tilley—“Big Sue,” Bowery’s cashier at his Taboo nightclub and a benefits supervisor at the Charing Cross JobCentre—dozes safely before a predatory image.

Francis Bacon: Study for Portrait of Lucian Freud, 1964 (photo by Prudence Cuming Associates Ltd./The Estate of Francis Bacon/DACS, London)

By the 1960s, when Freud was subjecting his models to hours and days of sitting, Bacon was standing back, using photographs rather than live models. One room here shows a selection of portraits he commissioned from the photographer John Deakin. These are direct, intimate, and suggestive, but when Bacon explores the human form, the effect is very different. Bodies and heads become twisted, swollen, contorted. In his Study for a Portrait of P.L. (1962), painted in the year of Peter Lacy’s death, after ten years of their turbulent, sometimes violent relationship, the internal and sexual organs seem to bulge through their covering. Two years later, his Study for Portrait of Lucian Freud emphasized the strong torso, the fierce expression, the unnerving clarity of Freud’s gaze. These are psychological as much as physical studies. In the moving Triptych (1974–1977), an unusual outdoor, light-filled work, the body beneath the umbrella writhes on the deserted beach, as Bacon mourns the death of his lover, George Dyer. But beyond the figures, the clear sky suggests a slow, painful coming to terms with loss—the promise of new life, or at least oblivion, in the deep blue of the sea beyond?

Bacon’s solitary figures are, paradoxically, imbued with a feeling of relationship. The same is true of Freud’s portraits, of his wife, his mother, his daughter, his friends. The intimacy of the family is also part of what it means to be “all too human.” Michael Andrews, for example, intrigued by Bacon’s use of photography, worked from a color photograph of a holiday in Scotland for his darkly beautiful Melanie and Me Swimming (1978–1979), spray-painted in acrylic. This feels like a moment swimming out of time into memory. And sometimes the sociability of London’s artistic life is itself commemorated. In Colony Room I (1962), Andrews painted the Colony Club, where Bacon, Freud, and Deakin drank with Soho’s artists and writers. “Life,” in the sense of a community, also fills R.B. Kitaj’s brilliant group scenes, such as Cecil Court, London W.C.2. (The Refugees) (1983–1984). His crowded, colorful The Wedding (1989–1993) celebrates not only his marriage to Sandra Fisher but his friendships—with Auerbach, Freud, Kossoff, and David Hockney, among others.

R.B. Kitaj: Cecil Court, London W.C.2 (The Refugees), 1983–1984 (The Estate of R.B. Kitaj/Tate)

Hockney’s work is inexplicably absent here, and so, up to this point in the show, are works by women, apart from a blurry, atmospheric nude by Dorothy Mead. But suddenly, you turn a corner, and there is Paula Rego. The streams of the London School flow together. Rego came from Portugal when she was sixteen to finish her education, and from 1952 to 1956 she studied at the Slade under Coldstream, alongside Andrews, Uglow, and her future husband, Victor Willing. As Victor slowly declined from multiple sclerosis, her painting became increasingly personal. The Family (1988), painted in the last months of his life, shows two women helping him take off his jacket—yet there is a strange undertone here: they seem to be shuffling him into the grave. The feeling is curiously sinister, perhaps reflecting Rego’s awareness that women—always the carers—are so often intimate with death. The little shrine in the background may show St. George slaying the dragon, but above him stands St. Joan, the martyred, martial saint.

Paula Rego: The Betrothal: Lessons: The Shipwreck, after ‘Marriage a la Mode’ by Hogarth, 1999 (Paula Rego/Tate)

Rego has often used stories to uncover the depths of our humanity, exposing the shattered dreams and desires of women across time. In Bride (1994), the bride lies back awkwardly, as if her wedding dress were a straitjacket. In the trilogy The Betrothal: Lessons: The Shipwreck, after ‘Marriage a la Mode’ by Hogarth (1999), Hogarth’s moral tale of greed and disease, a mockery of the dream family, is reworked in the fashions of her own childhood.

By contrast, the final room—apart from Celia Paul’s Family Group (1984–1986) and the powerfully interior Painter and Model (2012)—feels like a token addition, a nervous nod to gender and diversity. Jenny Saville, Cecily Brown, and Lynette Yiadom-Boakye are fine artists, but they belong in a different narrative. A misstep, yet “All Too Human” remains an extraordinary exhibition, full of works of deep seriousness and bold, brave fidelity to life. For me, it ends with Rego’s bitingly honest work. With her bold, distinctive use of outline and color, and her mighty sympathy for human pain and longing, her paintings show life in all its senses.

Celia Paul: Family Group, 1984–1986 (Celia Paul/Victoria Miro, London and Venice)

“All Too Human: Bacon, Freud, and a Century of Painting Life” is at the Tate through August 27.