Monthly Archives: June 2017

Tigers, Horses, and Stripes


Ellen Berkenblit: Jonesy, 2017 (Anton Kern Gallery, New York/Ellen Berkenblit)

In defiance and celebration of the earthen black that surrounds them, the vigorously delineated horses, flowers, hands, faces, stripes, a nude, and a foot found in Ellen Berkenblit’s striking new paintings at Anton Kern Gallery are a riot of luminous colors—violets, pinks, reds, oranges, greens, blues, and everything in between. The objects in the paintings have, according to the artist, been present in her imagery since her 1960s childhood in suburban New York, and are the stuff of mid-twentieth-century girlish fascination, mythical emblems amplified in memory.

As with Manet’s matadors or Matisse’s dancers—or more recently, the paintings of one of Berkenblit’s close contemporaries, Carroll Dunham, known for his idiosyncratic color palette, horses, and nudes—the things in Berkenblit’s paintings seem to be malleable and evocative receptacles of color and texture. The intertwining of image and technique has long been a concern of Berkenblit, who has been showing in New York for thirty years, and is, with Dunham, one of a handful of painters, including Dana Schutz, Jason Fox, and Amy Sillman, equally concerned with the possibilities of expressive figuration and virtuosic paint-handling.

Berkenblit began work on the paintings in the present exhibition last fall using her usual toolbox of small brushes, a palette knife, oil sticks, and a palette for color mixing. She employed two distinct modes of painting. In the first, she begins with charcoal on canvas. Then the paint comes in—defining areas, rendering forms, and bringing in bands of color that resemble fabric strips. In the second mode, Berkenblit, who has been a seamstress nearly all her life, sews together various fabrics, has the resultant quilt stretched and backed with cloth, and then coats it with PVA so that there’s a proper surface to take the paint. The quilt functions as a kind of under-drawing that she must respond to with bold lines and strips of color. It’s as though, she told me, she had written a poem and the words were thrown back at her with the command to write it anew.

In both modes she paints with the arc of her arm, wrist, and hand—a kind of active body calligraphy one might associate with post-war American abstraction. As the paintings progress over time, and the figures in them coalesce, both the colors and textures determine the formal composition. Berkenblit refers to her color mixing as a joyous, furious, faceted, and meditative process that results in small batches (the palette she uses is not large) that are always different. Tincture of Musk (2017) features a chalky violet for the Victorian cuffs encasing the wrist and hand that dominate the picture. The fingernails are aglow with orange, white, blue, and pink as they reach confidently into the darkness. The horses that stand relaxed and attentive in Lilac (2016) and Green Plume (2016) were initially painted with a mixture of various earthy browns—gritty, orange, and red. Then Berkenblit added transparent and extremely high-tinting paint that brightens the hue of the base color. This last touch—the turquoise—the artist referred to as her version of a perfumist’s “overdosing”—adding to a scent to push it toward a new and perhaps unexpected odor. At first glance, the horse is bay. Wait a second, blink, and there’s a turquoise haze. Then that new light is unavoidably present. In these pictures and others, that brown horse is ablaze against the encroaching black. Every painting in this exhibition features multiple shades and textures of black painted on after the figural elements, helping to build, erase, and ultimately define them. Berkenblit’s blacks are mottled with a palette knife—gently brushed on, dragged and rubbed.

Texture here serves both the painter, by giving her the pleasure of the touch, and the painting, by adding layers of meaning. In I Don’t Object If You Call Collect (2017) purple spreads over an embroidered fabric, highlighting the raised areas like a scientist brushing solution on a specimen. Each layer of paint reveals shapes and colors, both painted and sewn, as if simultaneously pre-existent and made anew. In other works, the layers within Berkenblit’s paintings seem to display the history of their own making. In Untitled (2017) a faint impression of a bow remains on the horse’s neck, a decision unmade and then left open; the horse doesn’t seem to mind as it stares at a horizontal swatch of violet. Berkenblit refuses to resolve her paintings: there is no perfection or sense of clarity here, and in this way she recalls not just her New York predecessors—the action figuration of Jim Dine, say, or the abstract excavations of Arshile Gorky—but Matisse’s scraped spaces in his 1910s paintings. Berkenblit’s refusal to seal her pictures, either thematically or compositionally, allows for many readings, and the pictorial subject matter—horses, tigers, nudes, hands—nods at myth-making. Her work is unironic, unembarrassed, and sincere about both form and content.

It would be foolish to ignore the timespan in which these paintings were made, but equally limiting to attach too much meaning to it. Untitled (2017) gives us a noble horse amidst flags. These are the colorfully emblazoned striped flags that can do double duty as the happy, celebratory signs of childhood parades and the dreadful symbols of today’s nationalism. Perhaps the most shocking and affecting painting in the exhibition is the one that gave Berkenblit the most trouble: V (2017). A woman strides through the black wearing only a dark brown velvet ribbon (composed of transparent reds and greens) around her neck. Her luminous skin tone is made up of various whites, raw sienna, purples, terre verte, and different transparent greens, among other pigments. A row of flowers stands beside her at diagonal attention. Her open hand tells us that she may pick one, or may have already. Her chin is just below the edge of the canvas. She is the most complete and thickly painted figure in the show, and appears uncannily strong. The painting is a stunning thing, alive like little else I’ve seen in recent times, and an act of implicit protest. As with the horses and hands, this woman is somehow moving through the black, hopeful as she evinces a boldness, beauty, and nobility.


Ellen Berkenblit: V, 2017 (Anton Kern Gallery, New York/Ellen Berkenblit)

Ellen Berkenblit’s paintings are on view at Anton Kern Gallery through July 6.


The Brave New World of Gene Editing


The biochemist Jennifer Doudna, a pioneer of the technique of DNA modification known as CRISPR, at her lab at the University of California, Berkeley, 2015 (Graeme Mitchell/Redux)

In recent years, two new genetic technologies have started a scientific and medical revolution. One, relatively well known, is the ability to easily decode the information in our genes. The other, which is only dimly understood by the general public, is our newfound capacity to modify genes at will. These innovations give us the power to predict certain risks to our health, eliminate deadly diseases, and ultimately transform ourselves and the whole of nature. This development raises complex and urgent questions about the kind of society we want and who we really are. A brave new world is just around the corner, and we had better be ready for it or things could go horribly wrong.

The revolution began in benign but spectacular fashion. In June 2000, President Bill Clinton and Prime Minister Tony Blair announced the completion of the first draft of the human genome. According to a White House press statement, this achievement would “lead to new ways to prevent, diagnose, treat, and cure disease.” Many scientists were skeptical, but the public (who footed much of the $3 billion bill) probably found this highly practical justification more acceptable than the mere desire to know, which was in fact a large part of the motivation of many of the scientists involved.

During the 2000s, Clinton’s vision was slowly put into practice, beginning with the development of tests for genetic diseases. As these tests have become widespread, ethical concerns have begun to surface. Bonnie Rochman’s The Gene Machine shows how genetic testing is changing the lives of prospective parents and explores the dilemmas many people now face when deciding whether to have a child who might have a particular disease. Some of these technologies are relatively straightforward, such as the new blood test for Down syndrome or the Dor Yeshorim genetic database for Jews, which enables people to avoid partners with whom they might have a child affected by the lethal Tay-Sachs disease (particularly prevalent in Ashkenazis). But both of these apparently anodyne processes turn out to raise important ethical issues.

Whether we like it or not, the Dor Yeshorim database and other similar initiatives, such as genetic tests for sickle-cell anemia, which largely affects African-Americans, are enabling us to deliberately change the frequency of certain human genes in the population. This is the technical definition of eugenics and might seem shocking, since eugenics is forever associated with the forced sterilization of the mentally ill and Native Americans in the US or the murder of those deemed genetically defective by the Nazis. But the ability to use genetic testing when deciding whether or not to have children is clearly a form of soft eugenics, albeit one carried out voluntarily by those affected and clearly leading to a reduction of human suffering. With the best of intentions and, for the moment, the best of outcomes, we have drifted across a line in the sand.

The new genetic test for Down syndrome also hides ethical traps. The test detects tiny amounts of fetal DNA in the mother’s bloodstream, and in the US it has largely replaced the widespread use of invasive alternatives (amniocentesis or chorionic villus sampling, in which cells are taken from the placenta) that involve a risk of miscarriage. The advent of a safe way to detect Down is a positive development (in the UK it is predicted that the test will prevent up to thirty invasive test–induced miscarriages each year), but some women feel that its simplicity means they are being inadvertently pressured into having a test for Down, and potentially into having an abortion if the test result is positive.

It is extremely difficult to obtain reliable data on how often identification of Down syndrome in a fetus has led to a decision to terminate a pregnancy, but a recent study in Massachusetts suggested that prior to the introduction of the safer test in 2011, around 49 percent of such pregnancies were aborted. Since many parents opted not to have an invasive test for fear of miscarriage (in the UK the figure was around 40 percent), it is reasonable to expect that an increased rate of identification of fetuses with Down syndrome will lead to more abortions. This has led to criticism from families with Down syndrome children, who understandably want to emphasize the joy they feel living with a child who has the condition. Rochman navigates these difficult waters with skill and compassion, drawing on conversations with families and physicians and setting out the ethical challenges and the range of solutions adopted by different people, without being preachy or moralistic.

In the last few years, genetic testing has entered the commercial mainstream. Direct-to-consumer testing is now commonplace, performed by companies such as 23andMe (humans have twenty-three pairs of chromosomes). Much of the interest in such tests is based not only on the claim that they enable us to trace our ancestry, but also on the insight into our future health that they purport to provide. At the beginning of April, 23andMe received FDA approval to sell a do-it-yourself genetic test for ten diseases, including Parkinson’s and late-onset Alzheimer’s. You spit in a tube, send it off to the company, and after a few days you get your results. But as Steven Heine, a Canadian professor of social and cultural psychology who undertook several such tests on himself, explains in DNA Is Not Destiny, that is where the problems begin.

Some diseases are indeed entirely genetically determined—Huntington’s disease, Duchenne muscular dystrophy, and so on. If you have the faulty gene, you will eventually have the disease. Whether you want to be told by e-mail that you will develop a life-threatening disease is something you need to think hard about before doing the test. But for the vast majority of diseases, our future is not written in our genes, and the results of genetic tests can be misleading.

For example, Heine reveals that according to one test, he has “a 32 percent increased chance” of developing Parkinson’s disease. Behind this alarming figure lurks the reality that his risk is only slightly higher than the small likelihood that is found in the general population (2.1 percent for Heine, 1.6 percent for the rest of us). That does not sound quite so bad. Or does it? What does a risk of 2.1 percent really mean? People have a hard time interpreting this kind of information and deciding how to change their lifestyle to reduce their chance of getting the disease, if such an option is available. (It is not for Parkinson’s.)
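
The arithmetic behind such figures is worth spelling out. What follows is a minimal sketch of my own, not anything from Heine’s book or from 23andMe’s reports, using only the two percentages quoted above; it shows how a dramatic-sounding relative increase corresponds to an absolute increase of half a percentage point.

```python
# Illustrative only: the baseline and personal risks are the figures quoted above.
baseline_risk = 0.016   # lifetime Parkinson's risk cited for the general population
personal_risk = 0.021   # lifetime risk cited for Heine

relative_increase = (personal_risk / baseline_risk - 1) * 100
absolute_increase = (personal_risk - baseline_risk) * 100

print(f"Relative increase: {relative_increase:.0f} percent")            # ~31, reported as "32 percent"
print(f"Absolute increase: {absolute_increase:.1f} percentage points")  # 0.5
```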

Even more unhelpfully, different companies testing for the same disease can produce different results. Heine was told by one company that he had a higher-than-average risk of prostate cancer, Parkinson’s, melanoma, and various other diseases, whereas another said his risk for all these conditions was normal. These discrepancies can be explained by the different criteria and databases used by each testing company. Faced with varying estimates, the average customer might conclude that contradictory information is worse than no information at all. As Heine puts it, “The oracle’s crystal ball is made of mud.”

More troublingly still, however imperfect its predictive value, the tsunami of human genetic information now pouring from DNA sequencers all over the planet raises the possibility that our DNA could be used against us. The Genetic Information Nondiscrimination Act of 2008 made it illegal for US medical insurance companies to discriminate on the basis of genetic information (although strikingly not for life insurance or long-term care). However, the health care reform legislation recently passed by the House (the American Health Care Act, known as Trumpcare) allows insurers to charge higher premiums for people with a preexisting condition. It is hard to imagine anything more preexisting than a gene that could or, even worse, will lead to your getting a particular disease; and under such a health system, insurance companies would have every incentive to find out the risks present in your DNA. If this component of the Republican health care reform becomes law, the courts may conclude that a genetic test qualifies as proof of a preexisting condition. If genes end up affecting health insurance payments, some people might choose not to take these tests.

But of even greater practical and moral significance is the second part of the revolution in genetics: our ability to modify or “edit” the DNA sequences of humans and other creatures. This technique, known as CRISPR (pronounced “crisper”), was first applied to human cells in 2013, and has already radically changed research in the life sciences. It works in pretty much every species in which it has been tried and is currently undergoing its first clinical trials. HIV, leukemia, and sickle-cell anemia will probably soon be treated using CRISPR.

In A Crack in Creation, one of the pioneers of this technique, the biochemist Jennifer Doudna of the University of California at Berkeley, together with her onetime student Samuel Sternberg, describes the science behind CRISPR and the history of its discovery. This guidebook to the CRISPR revolution gives equal weight to the science of CRISPR and the profound ethical questions it raises. The book is required reading for every concerned citizen—the material it covers should be discussed in schools, colleges, and universities throughout the country. Community and patient groups need to understand the implications of this technology and help decide how it should and should not be applied, while politicians must confront the dramatic challenges posed by gene editing.

The story of CRISPR is a case study in how scientific inquiry that is purely driven by curiosity can lead to major advances. Beginning in the 1980s, scientists noticed that parts of the genomes of microbes contained regular DNA sequences that were repeated and consisted of approximate palindromes. (In fact, in general only a few motifs are roughly repeated within each “palindrome.”) Eventually, these sequences were given the snappy acronym CRISPR—clustered regularly interspaced short palindromic repeats. A hint about their function emerged when it became clear that the bits of DNA found in the spaces between the repeats—called spacer DNA—were not some random bacterial junk, but instead had come from viruses and had been integrated into the microbe’s genome.

These bits of DNA turned out to be very important in the life of the microbe. In 2002, scientists discovered that the CRISPR sequences activate a series of proteins—known as CRISPR-associated (or Cas) proteins—that can unravel and attack DNA. Then in 2007, it was shown that the CRISPR sequence and one particular protein (often referred to as CRISPR-Cas9) act together as a kind of immune system for microbes: if a particular virus’s DNA is incorporated into a microbe’s CRISPR sequences, the microbe can recognize an invasion by that virus and activate Cas proteins to snip it up.

This was a pretty big deal for microbiologists, but the excitement stems from the realization that the CRISPR-associated proteins could be used to alter any DNA to achieve a desired sequence. At the beginning of 2013, three groups of researchers, from the University of California at Berkeley (led by Jennifer Doudna), Harvard Medical School (led by George Church), and the Broad Institute of MIT and Harvard (led by Feng Zhang), independently showed that the CRISPR technique could be used to modify human cells. Gene editing was born.

The possibilities of CRISPR are immense. If you know a DNA sequence from a given organism, you can chop it up, delete it, and change it at will, much like what a word-processing program can do with texts. You can even use CRISPR to introduce additional control elements—for example to engineer a gene so that it is activated by light stimulation. In experimental organisms this can provide an extraordinary degree of control in studies of gene function, enabling scientists to explore the consequences of gene expression at a particular moment in the organism’s life or in a particular environment.

There appear to be few limits to how CRISPR might be used. One is technical: it can be difficult to deliver the specially constructed CRISPR DNA sequences to specific cells in order to change their genes. But a larger and more intractable concern is ethical: Where and when should this technology be used? In 2016, the power of gene editing and the relative ease of its application led James Clapper, President Obama’s director of national intelligence, to describe CRISPR as a weapon of mass destruction. Well-meaning biohackers are already selling kits over the Internet that enable anyone with high school biology to edit the genes of bacteria. The plotline of a techno-thriller may be writing itself in real time.

A Crack in Creation inevitably focuses on Doudna’s work, providing insight into her own feelings as the implications of CRISPR slowly dawned on her and her principal collaborator, the French scientist Emmanuelle Charpentier. However, the book also describes the work of the many laboratories around the world that contributed to the breakthrough. This evenhanded approach contrasts with an article on the history of CRISPR written for Cell by the molecular biologist Eric Lander of the Broad Institute. Lander’s article was widely seen as unfairly emphasizing the work of the Harvard researchers Zhang and Church and downplaying the contribution of Doudna and Charpentier.* These contesting histories seek to influence not only who will get what seems like an inevitable Nobel Prize for the discovery, but above all the fortune that can be made, for individuals and institutions, from the patents to CRISPR applications.


Adult female Anopheles stephensi mosquitoes, important malaria carriers in urban India, transformed in genetic experiments to study whether they can be made inhospitable to malaria parasites (Anthony A. James/UC Irvine)

Frustratingly, Doudna and Sternberg say little about the patent issue, which is currently the focus of a complex legal case between the University of California and the Broad Institute over which group of researchers can rightfully license CRISPR-Cas9. In February, the US Patent Trial and Appeal Board ruled in favor of the Broad Institute, supporting its patent for the use of CRISPR-Cas9 in eukaryotic cells (including humans). The Berkeley team, on the other hand, had previously filed patents on the use of CRISPR-Cas9 in any cell, which, if supported by the courts, would mean that any researcher wishing to use the technology would have to get licenses from both Berkeley and the Broad Institute. The problem—apart from the obvious fact that the main beneficiaries of the US Patent Board’s decision will be lawyers, not scientists, and certainly not patients—is that the outcome may limit scientific inquiry by imposing fees for using CRISPR technology. More fundamentally, it can be argued that it is inherently wrong to patent discoveries made through publicly-funded research.

The story is far from over. The Berkeley team is appealing the initial decision; patents in other areas of the world, including Europe, have yet to be decided; other institutions have also filed patents that have yet to be examined in court; and the use of alternative enzymes that are more efficient than Cas9 may render the whole process moot. Initially, the Berkeley and Broad teams were working together on the commercialization of the technology, but something broke down in their relationship, and the current patent dispute is the consequence. What caused that rupture has not been made public, and Doudna and Sternberg give no hints.

The second half of A Crack in Creation deals with the profound ethical issues that are raised by gene editing. These pages are not dry or abstract—Doudna uses her own shifting positions on these questions as a way for the reader to explore different possibilities. However, she often offers no clear way forward, beyond the fairly obvious warning that we need to be careful. For example, Doudna was initially deeply opposed to any manipulation of the human genome that could be inherited by future generations—this is called germline manipulation, and is carried out on eggs or sperm, or on a single-cell embryo. (Genetic changes produced by all currently envisaged human uses of CRISPR, for example on blood cells, would not be passed to the patient’s children because these cells are not passed on.)

Although laws and guidelines differ among countries, for the moment implantation of genetically edited embryos is generally considered to be wrong, and in 2015 a nonbinding international moratorium on the manipulation of the human germline was reached at a meeting held in Washington by the National Academy of Sciences, the Institute of Medicine, the Royal Society of London, and the Chinese Academy of Sciences. Yet it seems inevitable that the world’s first CRISPR baby will be born sometime in the next decade, most likely as a result of a procedure that is intended to permanently remove genes that cause a particular disease.

Already in the early days of her research, Doudna seems to have been haunted by the implications of her work—she describes a disturbing dream in which Hitler keenly asked her to explain the technique to him. Over the last couple of years, following meetings with patients suffering from genetic diseases, Doudna has shifted her position, and now feels that it would be unethical to legally forbid a family to, say, remove a defective portion of the gene that causes Huntington’s disease from an embryo, which otherwise would grow into an adult doomed to a horrible death.

Like many scientists and the vast majority of the general public, Doudna remains hostile to changing the germline in an attempt to make humans smarter, more beautiful, or stronger, but she recognizes that it is extremely difficult to draw a line between remedial action and enhancement. Reassuringly, both A Crack in Creation and DNA Is Not Destiny show that these eugenic fantasies will not succeed—such characteristics are highly complex, and to the extent that they have a genetic component, it is encoded by a large number of genes each of which has a very small effect, and which interact in unknown ways. We are not on the verge of the creation of a CRISPR master race.

Nevertheless, Doudna does accept that there is a danger that the new technology will “transcribe our societies’ financial inequality into our genetic code,” as the rich will be able to use it to enhance their offspring while the poor will not. Unfortunately, her only solution is to suggest that we should start planning for international guidelines governing germline gene editing, with researchers and lawmakers (the public are not mentioned) encouraged to find “the right balance between regulation and freedom.”

The failure to resolve the issue of how to regulate gene-editing technology is even more striking when Doudna and Sternberg describe what they acknowledge is the most dangerous potential application of their technique: the deployment of what are known as gene drives, especially in species with short generation times, such as insect pests. Gene drives are artificial bits of DNA that rapidly spread through the population, unlike existing GMO techniques in which modified genes spread at a very slow rate and easily disappear from the gene pool. When a gene drive is used, the frequency of the altered gene increases exponentially with each generation, rapidly flooding the whole population. This is the technology that scientists have been proposing as a way of rendering all mosquitoes sterile or preventing them from carrying malaria, and it could clearly have an enormous effect on the epidemiology of some of the most deadly diseases. Over 300,000 children die each year of malaria; CRISPR gene drives could potentially save them by altering the mosquito’s genome.
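
To make the contrast with ordinary inheritance concrete, here is a toy calculation of my own devising, not a model from A Crack in Creation; the 1 percent starting frequency and the 95 percent “homing” efficiency are illustrative assumptions. Under plain Mendelian inheritance the modified gene’s frequency never moves; with the drive it floods the population within roughly a dozen generations.

```python
# Deterministic toy model: random mating, no fitness cost, and a drive that
# converts the normal allele in heterozygotes with efficiency e ("homing").
def next_frequency(p, e):
    # Heterozygotes transmit the drive allele with probability (1 + e) / 2;
    # with e = 0 this is ordinary Mendelian inheritance and p stays constant.
    return p * p + p * (1 - p) * (1 + e)

def spread(p0=0.01, e=0.95, generations=15):
    p, history = p0, [p0]
    for _ in range(generations):
        p = next_frequency(p, e)
        history.append(p)
    return history

if __name__ == "__main__":
    mendelian = spread(e=0.0)    # stays at 1 percent
    drive = spread(e=0.95)       # approaches 100 percent within about a dozen generations
    for gen, (m, d) in enumerate(zip(mendelian, drive)):
        print(f"generation {gen:2d}: Mendelian {m:.3f}, gene drive {d:.3f}")
```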

The problem with a gene drive is that it is essentially a biological bomb that could have all sorts of unintended consequences. If we make the mosquito inhospitable to the malaria parasite, we might find that, just as with the overuse of antibiotics, the parasite mutates in such a way that it can evade the effects of the gene drive; this change could also mean that it is immune to our current antimalarial drugs. Meanwhile, the alternative approach of eradicating the mosquito from a particular environment, as Doudna and Sternberg point out, may lead to unexpected changes in the ecology of the region—we simply do not know enough about ecology to be able to predict what will happen.

Claims that a gene drive that goes wrong could be reengineered (this is facilely called “undo” by its advocates) ignore the fact that other species might have been irreversibly damaged by the initial genetic change. Ecosystems are fragile. A vaccine against malaria might eventually become an ecologically safe alternative, but the advocates of gene drives understandably argue that if we carry on with our current approach, using insecticides and bed nets, malaria will continue to kill those hundreds of thousands of children each year, together with thousands more who are infected with other mosquito-borne diseases, such as Zika, dengue, West Nile virus, and chikungunya.

At the moment, there are no regulations governing if and how gene drive technology should be deployed. Part of the problem is that this is effectively a global question—insects travel easily, and they and the diseases they transmit can mutate as they go. An apparent solution in one part of the world might turn into a catastrophe in another, as manipulated insects and pathogens move unhindered across frontiers and enter new ecosystems. Global regulation of gene drives—much as we have global regulation of other potentially dangerous technologies such as civilian air travel or nuclear power—is crucial, but many governments, and especially the current US administration, have little appetite for international regulation.

Whether these developments excite us or appall us, we cannot unlearn what we have discovered. CRISPR is already speeding up scientific discovery, making it possible to manipulate genes in organisms and providing stunning insights into evolution, such as last year’s study by Neil Shubin at the University of Chicago that explored how fish fins were replaced by feet in land vertebrates nearly 400 million years ago. CRISPR will soon be applied to health care, making some previously lethal or debilitating diseases a thing of the past. Not all diseases will be easily cured—for example, the development of a cure for Duchenne muscular dystrophy is likely to be hindered for many years by technical difficulties associated with the delivery of CRISPR sequences to all the affected muscle cells—but we truly are emerging into a new world.

To prevent gene editing from taking a dystopian turn, strict regulation through internationally recognized guidelines must be found to protect our genetic information from unscrupulous states or commercial exploitation, prevent the irresponsible release of gene drives, and prohibit any form of discrimination against people because of their genes. Hostility to such discrimination should become a basic moral principle shared by societies around the world. The first step toward such an outcome is to ensure that the public and lawmakers understand the new technology and its dramatic implications. A Crack in Creation—the first book on CRISPR to present a powerful mix of science and ethics—can help in this process. As Francis Bacon said, knowledge is power.


Myth-Maker of the Brothel


Utamaro: Moon at Shinagawa (detail), 1788-1790 (Freer|Sackler, Smithsonian/Charles Lang Freer)

Of all the masters of the woodblock print in the Edo Period, Utamaro has the most colorful reputation. Hokusai was perhaps the greatest draughtsman, Hiroshige excelled in landscapes, and Kuniyoshi had the wildest theatrical flair. Utamaro (1753-1806) was the lover of women.

Not only did he create extraordinary prints and paintings of female beauties, often high-class prostitutes, but he was also, it was said, a great habitué of the brothels in Edo himself. Prostitutes, even at the top end of the market, no longer have any of the glamor associated with their trade in eighteenth-century Japan, but “Utamaro” is the name of a large number of massage parlors that still dot the areas where famous pleasure districts once were. Even in Utamaro’s time, the glamor of prostitutes was largely a fantasy promoted in guidebooks and prints. He made a living providing pictures of the “floating world” of commercial sex, commissioned by publishers who were paid by the brothel owners.

Three remarkable paintings by Utamaro set in different red light districts in Edo are the main attraction of “Inventing Utamaro: A Japanese Masterpiece Rediscovered,” a fascinating exhibition at the Sackler Gallery in Washington, D.C. The last time all three were seen together was in the late 1880s in Paris. The Japanese dealer Hayashi Tadamasa kept the earliest (between 1780 and 1790) and best one for himself. It is called Moon at Shinagawa (1788-1790), and shows an elegant teahouse with a view of the sea. A number of finely dressed “courtesans” are seen playing musical instruments, reading poems, and bringing out dainty dishes. This painting was acquired by Charles Lang Freer in 1903 and is now part of the Freer/Sackler collection.


Utamaro: Cherry Blossoms in Yoshiwara, 1792-1794 (Freer|Sackler, Smithsonian/Wadsworth Atheneum)

Cherry Blossoms in Yoshiwara (1792-1794), a gaudier picture of women singing and dancing in a typical teahouse/brothel with cherry blossom trees in full bloom outside, was sold to the Wadsworth Atheneum in Hartford, Connecticut, in the 1950s. But the whereabouts of the third picture was a mystery until it suddenly turned up at the Okada Museum of Art in Hakone, Japan, in 2014. Snow at Fukagawa (1802-1806), a little clumsily touched up recently by a Chinese restorer, again shows a tableau of women engaged in various activities—playing the three-stringed samisen, carrying bedding, drinking—associated with a house of pleasure.

In all three pictures, there is an almost total absence of men. These are women on display for the eyes of men, no doubt, advertisements for the sexual trade that played such an important part in the merchant culture of the Edo Period (1603-1868). Politically oppressive, the authorities nonetheless gave license to men to indulge themselves in amusements of varying degrees of sophistication acted out in a narrow and interconnected world of brothels and Kabuki theaters. Sex, kept in bounds by rules of social etiquette, was less threatening to the authorities than political activity. (Utamaro was arrested once, not for his pornographic prints, but for depicting samurai grandees, which was forbidden.) And the roles played by the women in this world, especially the high-class ones, were hardly less stylized and artificial than those performed at the Kabuki.

Utamaro’s personal reputation as a ladies’ man may be as imaginary as the sexual games acted out in the brothels. Very little is known about his life. It is known that he trained as an apprentice to an artist named Toriyama Sekien, who switched from the austere art of the Kano School to making prints of ogres and other fantastical figures in illustrated books.


Utamaro: Snow at Fukagawa, 1802-1806 (Freer|Sackler, Smithsonian/Okada Museum of Art, Hakone)

The legend of Utamaro as a demon of art, as well as an erotic connoisseur, began early on, but was later burnished in a movie by the great director Mizoguchi Kenji, entitled Utamaro and His Five Women (1946), which was based on a novel of the same title. The portrayal of the artist probably owes more to the way Mizoguchi saw himself than to historical accuracy.

The exotic image of traditional Japan as a kind of paradise of sexual refinement, which was already the product of a fantasy world promoted by artists like Utamaro, appealed to sophisticated collectors, writers, and artists in late-nineteenth-century Paris. The pleasure world of the Edo Period was seen as an elegant and sensuous antidote to the ugliness of the industrial age. And the same was true in Japan.

At first, in the last decades of the nineteenth century, when the Japanese were eager to modernize along Western lines, the hedonism of Floating World prints, and the wilder shores of Kabuki, were considered rather shameful. Soon, however, the popular theatrical genres and sensual entertainments of the past calcified in the culture of geisha and in classical Japanese theater, shorn of its wild inventiveness. But the art of Utamaro still retains the old spirit, which now evokes feelings of nostalgia.


Utamaro: Moon at Shinagawa, 1788-1790 (Freer|Sackler, Smithsonian/Charles Lang Freer)

The three paintings at the Sackler are unsigned and their provenance is cloudy. They may not be entirely the work of Utamaro. Some experts even claim that one or two of them are not by Utamaro at all. Another mystery lies in their odd sizes, much too big to be hung in a traditional Japanese alcove, or even on the walls of a Japanese house. Yet the subject matter would seem rather unsuitable for display in a temple. Many a fake Utamaro was made for the Western market, hungry for Japanese exotica. But these pictures seem too fine for that.

There are other items in the Sackler show that are well worth studying, especially a number of beautiful prints and illustrated books by Utamaro and others. At the very end of the exhibition there is a large color photograph of a brothel in Tokyo, probably taken at the end of the nineteenth century. We see several rows of what look like very young girls waiting behind wooden bars to be selected by clients passing by. They were virtually enslaved by their employers. Most died of disease in their twenties. It is a reminder that the highest artistic achievements sometimes emerge from the most squalid circumstances.


Kusakabe Kimbei: Yoshiwara Girls, 1890s (Honolulu Museum of Art, Gift of James H. Soong, 2012)

“Inventing Utamaro: A Japanese Masterpiece Rediscovered” is at the Sackler Gallery through July 9.


How Far Will the Court Go?


From top left: Justice Elena Kagan, Justice Samuel Alito, Justice Sonia Sotomayor, Justice Neil Gorsuch, Justice Ruth Bader Ginsburg, Justice Anthony Kennedy, Chief Justice John Roberts, Justice Clarence Thomas, Justice Stephen Breyer, Washington, D.C., June 1, 2017 (Jonathan Ernst/Reuters)

The 2016-2017 term, which concluded on Monday, opened with eight justices and every expectation that, after Hillary Clinton was elected, the Court’s balance would soon tilt liberal for the first time in four decades. Then Donald Trump won, Neil Gorsuch was appointed to fill the late Justice Antonin Scalia’s seat, and the Court once again had a five-member conservative majority. The Court had fewer headline-grabbing cases this term than in prior years, but it nonetheless decided several important cases—certainly enough for Gorsuch to show his colors, which thus far are deep red.  As Adam Liptak of The New York Times has noted, the Court was more united than ever this term, largely because, with eight justices for much of the time, it strove to achieve consensus by deciding cases narrowly. On constitutional matters, it was especially united in defense of First Amendment speech rights. But other issues continued to spark controversy—including state support of religion and the availability of damages for federal officials’ violations of basic constitutional rights. 

The Court decided two important speech cases. In Matal v. Tam, it struck down a federal law denying registration to trademarks that “disparage” individuals or groups. The challenge was brought by an Asian-American rock band that took the name “The Slants” as a way of reappropriating a racial and ethnic slur. But the Patent and Trademark Office used the same law to deny a trademark to the Washington Redskins. 

In language that seemed directed as much at campus speech controversies as at the current case, Justice Samuel Alito wrote for the majority: “Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.’” The Court has long held that the fact that speech offends is a reason to protect it, not to suppress it. In some sense, then, the Court’s unanimity is less surprising than the fact that the federal law it struck down had remained on the books for seventy-one years. 

In a second speech case, Packingham v. North Carolina, the Court was again unanimous, striking down a state law that made it a crime for individuals who had once been convicted of a sex offense to access Internet sites that permit children to become members or to create personal web pages. In 2002, Lester Packingham, then twenty-one years old, pleaded guilty to having sex with a thirteen-year-old girl. That made him a sex offender under North Carolina law. In 2010, when a traffic court dismissed a vehicle citation against him, Packingham posted a message on Facebook stating, “Praise be to GOD. WOW! Thanks JESUS!” He was prosecuted for the posting. Justice Kennedy, writing for the majority, eloquently recognized the central place that the Internet now has in the “free marketplace of ideas,” and insisted that laws excluding individuals from accessing such an important forum of expression must be carefully tailored. North Carolina’s law, which imposed an absolute bar on access to sites as important as Facebook, Twitter, and LinkedIn, was far too sweeping. 

The Court has long struggled with how to reconcile the twin dictates that the government may not establish religion but must also not discriminate against religion. Where the government supports similarly situated entities, can or must it support religious institutions as well, or does such support amount to an establishment of religion? In prior cases, the Court had permitted across-the-board secular services, such as fire and police protection, as well as vouchers to private citizens that are then used at religious schools. But it has drawn the line at states providing direct financial aid to churches. In Trinity Lutheran v. Comer, the Court for the first time not only permitted, but mandated, direct financial support to a church.

The case arose when Missouri deemed the Trinity Lutheran Church ineligible to receive state funding to refurbish its playground surface because the state constitution forbade direct financial support to churches—even though the playground was otherwise eligible for state funding. The Supreme Court ruled that Missouri violated the church’s right to the free exercise of religion by denying it funding for its playground simply because of its status as a church. Chief Justice Roberts, writing for the majority, emphasized that the decision was limited to state funding for non-religious uses; that limitation presumably led Justices Elena Kagan and Stephen Breyer to concur in a 7-2 result, with only Justices Sonia Sotomayor and Ruth Bader Ginsburg dissenting, arguing that the Court should not require states to provide direct financial assistance to churches, and that in any event, the playground would be used for religious purposes. Justices Clarence Thomas and Gorsuch would have gone further than the majority, requiring the state to fund even direct religious practices where it funds similar non-religious practices. But significantly, no one else on the Court was willing to go that far.    

The Court’s most disappointing and far-reaching decision of the term was Ziglar v. Abbasi, a case dating from the sweeping roundups of Arab and Muslim men carried out by the Bush administration following the September 11 attacks. The administration put more than five thousand Arab and Muslim immigrants in preventive detention in the first two years after September 11, not one of whom turned out to have been connected to the attacks or to have been convicted of terrorism. The Ziglar case, with which I was involved as a cooperating attorney with the Center for Constitutional Rights in its early stages, challenged the government’s imposition of harsh and punitive conditions of confinement on persons “of interest” to the 9/11 investigation, based not on evidence of terrorist involvement, but on their ethnicity or religion. The plaintiffs were Arab and Muslim men detained for months, much of it in solitary confinement, denied access to counsel or the outside world, shackled, and slammed against walls. All were cleared of any terrorist connections, but not before they had suffered grievous injury. They sued Attorney General John Ashcroft and others for money damages, under a 1971 precedent allowing victims of constitutional injuries to sue federal agents for such relief.

The case was decided by the unusual vote of 4-2, because two justices (Kagan and Sotomayor) were recused, and the case was argued before Gorsuch joined the Court. Writing for the majority, Justice Anthony Kennedy ruled that the claims could not even be heard, because they sought to hold responsible high-level government officials acting in the ostensible interests of national security. The Court had previously allowed damage suits for discrimination and harsh prison conditions, but Kennedy reasoned that this case was different because it involved national security. Had the individuals been able to get their case to court while they were incarcerated, Kennedy acknowledged, they could have sued to stop the violations. But for reasons Kennedy never adequately explained, the Court ruled that a damages remedy after the fact was absolutely barred. As Justice Breyer noted in dissent, this is particularly troubling in a case involving national security issues, both because individuals often face insurmountable barriers to getting into court while detained, as was the case here, and because after the fact courts can review the cases with the perspective and deliberation that promotes good judgment. By immunizing high-level officials from after-the-fact judicial review of their actions in times of crisis, the Ziglar decision threatens to free up executive officials to act without regard to the constitutional consequences precisely when the pressure to overreach is greatest. 

On its final day, the Court announced that it would grant review in two cases challenging President Trump’s travel ban. (I am counsel with the ACLU in one of the cases, International Refugee Assistance Project v. Trump). Lower courts have consistently barred the ban from going into effect, on grounds that it violates the Establishment Clause by targeting Muslims, and exceeds the president’s powers under the immigration laws. The government had asked the Court to stay the injunction pending its review. But the Court left the injunction in place for all foreign nationals with a connection to a person or entity in the United States, and allowed the travel ban to go into effect only for foreign nationals with no connection to the United States (as determined in the first instance by federal government officials). By reaching this result, a middle ground that does not tip its hand regarding the merits of the appeal, the Court was able to achieve relative consensus and issue a “per curiam” opinion joined in full by six justices and in part by the whole Court. Justices Alito, Thomas, and Gorsuch wrote separately to say that while they agreed with the partial stay, they would have gone further, giving the government all it requested. Significantly, however, the other six justices declined to do that—and instead chose to leave the lower court injunctions in place for all the plaintiffs before the Court, and all other foreign nationals with similar ties. The case will be argued in October, and is the first constitutional test of the Trump administration to reach the Supreme Court. The Trump administration has thus far argued for blind deference, urging the courts to ignore what Trump has repeatedly said about the order—namely, that it is designed to ban Muslims. That could be a tough argument for the Court, an independent branch charged with defending constitutional rights, to accept.

The best news of the term was that Justice Kennedy did not retire, after widespread rumors that he might. Kennedy sits at the Court’s ideological center, and has been the swing vote in politically charged cases ever since Justice Sandra Day O’Connor retired in 2006. He is a Republican and a conservative, and often votes with his more conservative colleagues, but on this Court he has been a moderating influence. He has cast decisive votes to recognize same-sex marriage, to strike down sodomy statutes, to save affirmative action, to uphold the right to choose to terminate a pregnancy, to prohibit punishment of flag-burning, and to end the death penalty and mandatory life without parole for juveniles. He has lamented the harshness of the criminal justice system and invited a constitutional challenge to solitary confinement. If he steps down and is replaced by a hard-right conservative, vetted and approved by the Federalist Society, the Court will shift dramatically to the right—at a time when, given the Oval Office’s current occupant, the judiciary’s check on the executive branch is more essential than ever.

The travel ban won’t be the only big case before the Court next term. It has already agreed to hear cases concerning the rights of same-sex couples to equal treatment from businessmen who object to serving them on religious grounds, the rights of all of us to preserve the privacy of our whereabouts even when we carry a cellphone, the constitutionality of prolonged detention of immigrants, and whether there are any limits on egregiously partisan gerrymandering. It’s a heady lineup. No wonder Justice Kennedy isn’t retiring.   


The Nineteenth-Century Trump


President Donald Trump looking at a portrait of Andrew Jackson, Nashville, Tennessee, March 15, 2017 (Jonathan Ernst/Reuters)

Donald Trump has often been likened to Andrew Jackson; this is welcomed and encouraged by Trump himself. President Trump has hung a portrait of Jackson prominently in the Oval Office and visited Jackson’s plantation home in Tennessee to honor his 250th birthday on March 15. He draws on the memory of President Jackson to give legitimacy to his own presidency in a number of ways, and Jackson’s brand of nationalism is all the more relevant today since it was directed, in part, against Mexico—Jackson hoped to take Texas from Mexico and annex it to the United States, a policy that eventually culminated in the war waged against Mexico by Jackson’s protégé, James Knox Polk. Jacksonian nationalism was also racial: a white man’s Americanism, excluding Mexicans, Indians, blacks, and on occasion even women.

Trump’s evocation of Andrew Jackson is intended to underscore the populist appeal of both leaders. Jackson, who served from 1829-1837, mobilized the white working class of his time—small farmers—much as Trump has sought to mobilize the white working class of our day. Nevertheless, their populist nationalisms are not identical, as historian Sean Wilentz has pointed out. Jackson firmly defended the federal government’s power over the states when South Carolina challenged it over the issue of an import tariff that, while protecting Northern industries, made certain goods in the South more expensive, particularly the cheap textiles used to make slaves’ clothing. Trump wants the federal government to shrink back from many of its activities, leaving education, science, healthcare, and the regulation of business largely to the states. Jackson was eager to reduce the federal deficit and succeeded in briefly eliminating the national debt entirely. What effect Trump’s budget will have on the deficit is far from clear (though in order to balance the budget it requires growth rates that are more than a percentage point higher than what the Congressional Budget Office estimates).

The most important parallel between Trump and Jackson lies in their rallying the white working class against ethnic minorities: Jackson against American Indians and blacks, Trump against Mexican immigrants and Muslims. Jackson’s project of “Indian Removal” was the first substantive issue his administration pursued after his inauguration in 1829. The avowed goal was to force Native Americans out of the lands east of the Mississippi that they had been guaranteed by treaties and send them, under military escort, west of the Mississippi to reservations in what is now Oklahoma and Kansas. The formerly tribal lands would then be available for white settlement. Ironically, there was little actual need at the time for new lands open to white settlers. When the Cherokee Tribe was evicted from its homeland in Georgia, in the devastating forced migration known as the Trail of Tears, that state recognized that there was no commercial market for the Cherokees’ abandoned farmlands, even with all the Indians’ improvements, and simply raffled them off.

Trump has experienced early difficulties staffing his administration, and so did Jackson. Jackson’s initial choices for Cabinet posts did not prove an effective working group. They split between followers of Secretary of State Martin Van Buren and those of Vice President John C. Calhoun. Like Trump, who seems to rely heavily on his son-in-law, Jared Kushner, on foreign policy, even if it means contradicting his secretary of state, Jackson turned increasingly to an informal group of advisers, jokingly disparaged as the “kitchen cabinet,” in contrast to the formal Cabinet meeting in the parlor. Jackson’s favorite, Van Buren, met with both. To staff lower federal offices Jackson initiated what was called “the spoils system”—in other words, a patronage system to reward political followers rather than a merit system seeking out competence and talent. (There was then no civil service system such as we have now.) The appointments of both Jackson and Trump have provoked surprise and alarm from contemporary observers.

Trump and Jackson share a reputation as “outsiders.” Though Trump inherited wealth, Jackson actually did come up the hard way from poverty in frontier Tennessee. He bought and sold slaves early and often in the course of his rise to wealth and influence. Once, in 1817, he sold forty people at one time for $23,000. On another occasion, after one of his slaves ran away, Jackson offered a $50 reward in the Tennessee Gazette for his recapture “and ten dollars extra for every hundred lashes a person will give to the amount of three hundred.” Three hundred lashes risked beating the man to death, but perhaps revenge outweighed financial interest. Not surprisingly, the Jackson administration consistently supported the institution of slavery, even to the point of interfering with the transmission of antislavery mail through the Post Office, in violation of federal law. Proslavery policy fit perfectly well with Jacksonian populism. Slavery and the repression of black people were at least as popular among poor non-slaveholding Southern whites as among slave-owners themselves.


Political cartoon of Andrew Jackson by Thomas Nast from Harper’s magazine, 1877 (Fotosearch/Getty Images)

An important parallel between Trump and Jackson lies in their efforts to reshape the political organizations of their time. When Jackson’s presidential campaign first appeared, almost all American politicians avowed membership in a single political party, the Jeffersonian Republicans. Jackson and his follower Martin Van Buren succeeded in reshaping that party into the Democratic Party we have known ever since, and in the course of doing so provoked the emergence of a rival party called the Whigs. Trump too seems to need to transform the existing party system, by anchoring the Republican Party in the provincial working class, in addition to its traditional base in the business community. Whether Trump will succeed in such a dramatic undertaking—let alone serve out two terms in office as Jackson did—remains unclear. So far, he does not seem to have Jackson’s knack for political decision-making.

Donald Trump is notorious for violating present-day standards of sexual behavior. Andrew Jackson also violated the conventions of his own day, although this parallel has yet to provoke comment. In 1790 he began living with a woman named Rachel Robards, who was married to another man. Lewis Robards divorced her in 1793 on grounds of adultery, and soon afterward Rachel and Andrew married. The episode was unearthed during the presidential campaign of 1827-1828 by supporters of Jackson’s opponent, John Quincy Adams, and became a political issue. Once he was in the White House, another such difficulty emerged. Jackson appointed John Eaton Secretary of War, to be in charge of Indian Removal. Eaton’s wife Margaret (a.k.a. Peggy) had a checkered past and was ostracized by the wives of the other Cabinet secretaries as a loose woman unworthy of polite society. Jackson famously declared her “chaste as a virgin,” but could not make his Cabinet secretaries force their wives to toe his line. “I did not come to Washington to make a cabinet for the Ladies of this place,” he raged. In the end, Jackson had to dismiss his entire Cabinet to get beyond the problem.

One of the most significant—though as yet little noticed—similarities between Jackson and Trump is disregard for truth. Trump has become notorious for uttering untruths, although his critics sometimes lump ill-informed factual errors together with deliberate lies when criticizing him. Andrew Jackson and his followers spread lies about John Quincy Adams when running against him for president, one of the most preposterous being that as ambassador to Russia Adams had procured an American girl for the sexual gratification of the tsar. Also untrue was the charge that Adams had put a billiard table in the White House at public expense. (Adams did install a billiard table, but he paid for it himself.) Playing billiards seemed an alien activity to Americans at the time, conforming to the negative stereotype of Adams as un-American, snobbish, and intellectual. To defend Jackson against charges of adultery, his campaign invented a story about a wedding between Andrew and Rachel in 1791, when, supposedly, they thought Lewis Robards had already divorced Rachel. Careful historical research by Jackson’s sympathetic biographer Robert Remini has disproven this tale. For Jackson, past events could be reshaped to protect his honor. Jackson never apologized, never forgave, and did not shrink from violence. He participated in several duels and fights before being elected president, killing a man in one of them.

Ironically, the experiences of Jackson and Trump left them with quite different attitudes toward the Electoral College system. Jackson led in popular votes in the election of 1824, but lacked a majority in the Electoral College. In accordance with the Constitution, the presidential choice then reverted to the House of Representatives, which selected John Quincy Adams instead. Andrew Jackson had to wait until 1828 to gain a majority of electoral votes. Afterward, he advocated abolishing the Electoral College and choosing presidents by popular vote. Trump, of course, has every reason to love the Electoral College.

All in all, President Trump is by no means off the mark to call attention to Andrew Jackson as a precursor. The analogy, however, is not necessarily flattering.


Romania: On the Border of the Real


Sundance Selects/Why Not Productions: Adrian Titieni as Romeo in Cristian Mungiu’s Graduation, 2016

Cristian Mungiu’s latest movie, Graduation—for which he won Best Director at Cannes last year—opens with an establishing shot of a dusty European square surrounded by small apartment blocks, then quickly cuts to an interior: a neat living room, with lamps and sofa and table. And there the camera lingers. You might think it a photograph, if net curtains weren’t moving slightly at the picture’s edge. There are a few lulling seconds of noise from the off-screen square: cars, children playing. Then abruptly a rock is thrown through the window—and the curtains flare out wildly.

This image of an interior shattered by outside forces could be the emblem for all Mungiu’s films. He loves to present stories in which someone’s integrity is assailed by external influences, and Graduation offers one of his most melancholy contraptions for testing his characters’ limitations. The setting is the Romanian city of Cluj. Romeo, a doctor, lives with his wife Magda and daughter Eliza. He is quietly pursuing an affair with Sandra, a single mother who is also a teacher at Eliza’s school; meanwhile, he is gently evading questions from his aging mother about her deteriorating health. But this system of everyday domestic duplicity is soon to be overtaken by a larger network of moral compromise.

Romeo’s obsessive goal is for Eliza to get the grades she needs from her high school exams so she can go to university in Britain. He is desperate for her to leave the country—just as he blames himself and Magda for returning to it, after leaving in 1989. (“We thought things would change,” he tells his daughter, “we thought we’d move mountains. We didn’t change anything.”) But Eliza is sexually assaulted the day before the exams; injured and in shock, she still has to take the tests. Her first exam goes badly. It is suddenly uncertain that she will get the necessary grades.

What follows is a family melodrama, taking place over the two or three days of the exams: a chain of small corruptions and unexpected calamities, as Romeo makes a deal with the deputy mayor, the chief of police, and the headmaster of Eliza’s school, involving a carousel of mutual favors, in order to have Eliza’s grades quietly doctored. And in the process, the large hinterland of Romeo’s self—his capacity for betrayal, contradiction, self-pity—is brutally revealed.


Sundance Selects/Why Not Productions: Titieni as Romeo and Maria-Victoria Dragus as Eliza in Mungiu’s Graduation, 2016

Mungiu’s first movie, Occident, came out in 2002, when he was thirty-four. It was a small set of interlinking stories about young Romanians wanting to emigrate to the West. But he discovered his true form with his second film, 4 Months, 3 Weeks and 2 Days—a gruesomely suspenseful story about an illegal abortion in Communist Bucharest—which won the Palme d’Or at Cannes in 2007. And he followed it in 2012 with another intense narrative, Beyond the Hills, about an exorcism in a provincial Romanian monastery.

In the fifteen years of his career, Mungiu has refined his explorations in a hybrid form: melodrama filmed with naturalistic technique. The stories his films tell possess an old-fashioned three-act structure: crisis, complication, finale. His characters are starkly arranged on either side of a moral border. And yet the look is much more casual and less controlled. It’s visible in the cinematography, where random objects block the camera’s view, or the focus is adjusted in mid-shot; and also in the wonderful clutter of his sets, like the opening image (another still life) in 4 Months, 3 Weeks and 2 Days—a table with a burning cigarette in an ash-tray, a clock, a cup and saucer, a bowl, some bank notes, a cigarette package, a lamp, underwear drying on a radiator, some hand lotion, some milk, and a fish bowl with a drawing of a cityscape inside it. Only the cigarette smoke and the fish are moving.

That insouciant naturalism is what places him in what’s become known as the Romanian New Wave—a group of filmmakers who began their careers about a decade after the fall of Communism in 1989. As well as Mungiu, the group includes Radu Muntean, Corneliu Porumboiu, and, most importantly, Cristi Puiu. It was Puiu’s brilliant movie Stuff and Dough that in 2001 established the aesthetic of wild realism that would be employed, with individual variations, by every member of the New Wave. Puiu said that he found it in the American cinema of Cassavetes, but it also feels like something modeled on Lars von Trier and Thomas Vinterberg’s Dogme 95 manifesto—the use of natural light and handheld cameras, a refusal of external music: the absolute lo-fi avant-garde.

But Puiu’s true originality has been his approach to narrative. He is a master of a category of detail we experience everywhere in life and almost nowhere in art: the possibly connected, the random but still meaningful. (Another of its masters is Jim Jarmusch—and Jarmusch is an explicit influence on Stuff and Dough and on Puiu’s subsequent 2004 short, Un cartuş de Kent şi un pachet de cafea, whose title is a riff on Jarmusch’s Coffee and Cigarettes, which came out a year earlier.) In Stuff and Dough, a slacker agrees to carry black market medicines from Constanța to Bucharest. This mini-mobster premise seems to constantly imply a kind of gangster movie, but while Puiu included occasional noir tropes—a menacing SUV, a taciturn boss—these never coalesce into anything as ordered as a plot.

The reason for the rarity of this kind of ambiguity in fiction, I think, is its difficulty: it’s hard to construct a composition where random detail is held in suspension, neither meaningless nor predictably meaningful. In Stuff and Dough, Puiu made his first investigation into this problem—and it would flower in his subsequent movies, The Death of Mr. Lazarescu, Aurora, and, most recently, Sieranevada.


Mandragora/Mitropoulos Films: Alexandru Papadopol, Dragos Bucur, and Ioana Flora in Cristi Puiu’s Stuff and Dough, 2001

In Mungiu’s work, on the other hand, the composition is always just a little too insistent. Every detail is a cause or an effect, every ambiguity eventually embalmed in resolution. That rock thrown through the window in Graduation, for instance, marks the beginning of a series of small acts of vandalism against Romeo that seem to go unexplained. But then comes a moment toward the end of the movie when Romeo, kicked out by his wife, spends the night at his girlfriend’s house. The next morning, she asks him to look after her young son, Matei. He takes Matei to the playground—where Matei throws stones at a kid who hasn’t waited in line to play on a jungle gym. In this movie where no scene is without its narrative point, it’s a depressingly closed moment: too obviously there to inform us that Matei—upset at Romeo’s affair with his mother—has been responsible for the small-scale acts of violence. The weight of the apparently random is dissolved in the acid of Mungiu’s planning.

Every cinematic New Wave—ever since the original Nouvelle Vague—has brandished its own particular manifesto of savage naturalism. In Romania, that savagery has taken two forms: an insistence on minimalist filmmaking, and a vision of post-1989 society as inescapably corrupt and corrupting, the provincial as a form of doom. “I wanted to tell the story of a compromise,” Puiu said of Stuff and Dough, and compromise has been the Romanian New Wave’s basic theme. I wonder if this is why the stories they tell so often flirt with the theatrical unities—time-limited situations of danger, where the characters are in conflict with the institutions of power. The time pressure acts as an accelerant to uncover the characters’ weaknesses—and this is especially true of Mungiu.


Sundance Selects/Why Not Productions: Titieni as Romeo and Lia Bugnar as Magda in Mungiu’s Graduation, 2016

But this theatrical form, I began to suspect, with its improbable high-speed series of sudden illnesses and revelations, expresses a larger problem than Mungiu seems to know. The ostentatiously scruffy look of his films is designed to imply a literalism, an absolute reality. But the reality promised by the film’s images is threatened in two ways: by the melodrama of the narrative that these images construct, and by the conventionality of the images’ framing. There’s almost no shot in Graduation that isn’t a single or dual portrait. The camera never roams an interior or a landscape. Nor does it ever retreat into the far distance, or into gruesome close-ups. Everything is shot from the neat distance of a conversation—or an audience.

Maybe every film director has to find ways of refusing the forms of theater. That’s one lesson of Graduation—and it becomes more visible if you compare its contradiction between script and camera to a similar kind of discrepancy in some of Lars von Trier’s films of the Dogme era: Breaking the Waves, for instance, where an allegorical story of a woman’s brutal self-sacrifice is told through the giddy, skittering images captured by handheld cameras. For von Trier has always deliberately exploited his combinations of filmic elements—plots and genres and styles—which are usually kept separate. His aim is for a radical instability of tone (most notoriously perhaps in Dancer in the Dark: a musical about the death penalty). He loves to play with the multiple elements of a film, to exacerbate their potential divergences.

Mungiu, instead, has always asserted a studied aesthetic neutrality. In an interview about Graduation, he observed: “I won’t use music because there is no music in life, and I won’t signal to you as a spectator how to feel.” But true neutrality may be more elusive. For Mungiu does include music in Graduation. The film is punctuated by exquisite baroque arias, in particular Handel’s “Ombra mai fu”—conveniently being listened to within the movie, in an apartment or the car. They assert a pathos that the film itself never quite produces. This reminded me of a moment in Richard Leacock and D.A. Pennebaker’s short documentary Two American Audiences, recording Jean-Luc Godard’s visit to NYU in 1968. Godard was asked why in his film La Chinoise he interrupted characters’ conversations with loud excerpts from Vivaldi. It seemed, said an earnest grad student, a mystery. “Why is it a mystery?” replied Godard. “When you are walking in the street, you are suddenly whistling for ten seconds, and then, you know… I mean: there is nothing more than that.” In that moment, Godard seems at once a more extravagant filmmaker than Mungiu, and a greater realist.


Cristian Mungiu’s Graduation was released in theaters in the US this spring and is now available on DVD in the UK. 


Britain: When Vengeance Spreads


Leon Neal/Getty Images: The Finsbury Park Underground station, near the site of the June 18 attack on a group of Muslims, London, June 20, 2017

On June 19, the day after a forty-seven-year-old man from Wales, Darren Osborne, drove a van over a group of Muslims near a mosque in Finsbury Park, north London, leaving one person dead and nine injured, I went for a swim in a municipal pool a few miles from where the attack took place. The pool is a popular amenity in my community, and the diversity of those who frequent it—all races, ages, and backgrounds seem somehow represented—reflects the world city that London has become.

Arriving a few minutes before the doors opened, I fell in with four regulars, all of them non-Muslims, just as the conversation turned to the attack. Rather than expressing sympathy for the victims, my fellow swimmers made comments suggesting they felt justice had been done. “What did the Muslims expect?” asked one woman. “After everything they’ve done to us,” agreed another. The only one in the group who demurred was an evangelical Christian; he argued that it was wrong to kill worshippers.

For all the gestures of inter-communal solidarity that have received much publicity since the June 18 attack, the more significant and ominous sentiment has been one of vindication. This feeds off the logic that the actions of Darren Osborne were an inevitable and perhaps necessary response to the attacks by unhinged Islamists that took place in London and Manchester in the weeks before the election, attacks in which at least thirty-five people were killed.

Starting with Osborne himself, a lot of blame for the Finsbury Park attack has been heaped on the victims. “This is for London Bridge” (where the most recent jihadi attack took place, on June 3), the assailant is reported to have yelled. Richard Gear Evans, whose father’s company had rented Osborne his van, publicly regretted that Osborne hadn’t had access to “steam rollers or tanks.” (Evans has since been arrested on suspicion of stirring up racial hatred.) A far-right rabble-rouser, Tommy Robinson, who had called Osborne’s actions a “revenge attack,” made a sensational appearance on a morning television program during which he held up a Koran and declared, “There will never be peace on this earth, so long as we have this book.” Robinson’s own autobiography has since soared up the Amazon charts.

These ideas are not confined to the fringe; they appear to be held by a substantial minority. Anecdotal evidence, the prevalence of online Islamophobia (much of it untraceable owing to the use of VPNs), and a spike in cases of anti-Muslim taunting in the street suggest that many Britons, from small towns in southern England to depressed, working-class areas in the north, feel that “they” had it coming. 

Over the past three months mainstream politicians and community leaders have repeated the platitude that ISIS wants to turn communities on each other, but in truth it is immaterial whether civil war between Europe’s Muslims and their non-Muslim “hosts” is an ISIS objective. (The group’s propaganda concentrates on the desirability of Muslims killing infidels, not the other way around.) If the likelihood of communal strife has increased as a result of the Finsbury Park attack, this is because vengeful thinking is spreading across society.

The main cause of this is, of course, the terrorists themselves, but in some politicians and media figures they have found eager abetters. Following the bombing of the Manchester arena on May 22, in which several young girls were killed, there were calls for interning suspected Islamic radicals; and the prominent broadcaster Katie Hopkins demanded a “final solution.”    

Against this already troubled backdrop, the significance of Darren Osborne is that he is the first Briton to have turned on Muslims indiscriminately—and using the jihadis’ trademark weapon of the rented van. This sets him apart from Thomas Mair, the white supremacist who assassinated the Labour Member of Parliament Jo Cox in June 2016. Cox was herself a white non-Muslim—she had earned Mair’s hatred for her liberal stance on immigration—and so no community existed to retaliate on her behalf.

Finsbury Park marked the first time since the Irish Troubles that the United Kingdom had experienced tit-for-tat communal killings. The country’s proud self-image as a refuge of tolerance and judicious multiculturalism, usually held up in contrast to France’s less flexible notions of national identity, is dissolving. The government may not possess enough vision, empathy, and authority to lead people back to relative serenity. Theresa May’s new administration is already damaged by questions over its own survival, and will spend the whole of its (probably brief) life preoccupied by Brexit negotiations. In her policy statement for the new parliament, May promised to set up a commission that will “support the government in stamping out extremist ideology in all its forms.” That the government is taking radical right-wing ideologues more seriously is encouraging. But the foundation of the country’s anti-extremism strategy is the assumption that Britain’s three million Muslims (the number has doubled since 2001) are potentially untrustworthy.

Since 2015, public bodies including universities and hospitals have been legally bound to monitor the people who come through their doors for signs of radicalization, causing understandable resentment among some Muslims who have been unjustly profiled and a marked reluctance on the part of many others to express themselves. “Moderate” Islamic groups cultivated by the government have been weakened by the perception that they are stool pigeons; all the while communities that are socially very conservative, but otherwise orderly and law-abiding, have been the object of attempts to entrench liberal “British values,” increasing the perception that Islamism isn’t the problem; Islam is.

It isn’t by banging them over the head that the members of these often insular communities will be induced to engage with mainstream British culture; reminding them of the Prophet Muhammad’s thirst for conquest, as many popular commentators are doing nowadays, is unlikely to lead them to reconsider their faith. These measures will, on the contrary, turn many Muslims further in on themselves. A recent book on British Muslims, Al-Britannia, by James Fergusson, found that the mood among them is colored by “fear, paranoia, anger and confusion.”

The example of the Bosnian War in the early 1990s, and the savagery with which Serbs and Croats turned on their Muslim neighbors, shows how rapidly co-existence can turn to violence. Britain, of course, is not the product of a partition, as Bosnia was, nor are its institutions those of a failed state. All the same, it suggests the low esteem in which the country’s figures of authority are held that the most vital contribution to communal harmony in recent days was made not by the government, the police, or London’s mayor, but by a mosque imam who used his authority to prevent Darren Osborne from being lynched after he was seized by the crowd he had tried to kill. In that moment of terrifying clarity, as they formed a cordon around their would-be killer, it was as if Mohammed Mahmoud and a few of his co-religionists saw the abyss opening at their feet, and straining, perhaps, against their own instincts, forced it shut again.


A Presumption of Guilt

Late one night several years ago, I got out of my car on a dark midtown Atlanta street when a man standing fifteen feet away pointed a gun at me and threatened to “blow my head off.” I’d been parked outside my new apartment in a racially mixed but mostly white neighborhood that I didn’t consider a high-crime area. As the man repeated the threat, I suppressed my first instinct to run and fearfully raised my hands in helpless submission. I begged the man not to shoot me, repeating over and over again, “It’s all right, it’s okay.”


Museum of Modern Art, New York/© 2017 The Jacob and Gwendolyn Knight Lawrence Foundation, Seattle/Artists Rights Society (ARS), New York: ‘The migration gained in momentum’; painting by Jacob Lawrence from his Migration series, 1940–1941

The man was a uniformed police officer. As a criminal defense attorney, I knew that my survival required careful, strategic thinking. I had to stay calm. I’d just returned home from my law office in a car filled with legal papers, but I knew the officer holding the gun had not stopped me because he thought I was a young professional. Since I was a young, bearded black man dressed casually in jeans, most people would not assume I was a lawyer with a Harvard Law School degree. To the officer threatening to shoot me I looked like someone dangerous and guilty.

I had been sitting in my beat-up Honda Civic for over a quarter of an hour listening to music that could not be heard outside the vehicle. There was a Sly and the Family Stone retrospective playing on a local radio station that had so engaged me I couldn’t turn the radio off. It had been a long day at work. A neighbor must have been alarmed by the sight of a black man sitting in his car and called the police. My getting out of my car to explain to the police officer that this was my home and nothing criminal was taking place prompted him to pull his weapon.

Having drawn his weapon, the officer and his partner justified their threat of lethal force by dramatizing their fears and suspicions about me. They threw me on the back of my car, searched it illegally, and kept me on the street for fifteen humiliating minutes while neighbors gathered to view the dangerous criminal in their midst. When no crime was discovered and nothing incriminating turned up in a computerized background check on me, I was told by the two officers to consider myself lucky. While this was said as a taunt, they were right: I was lucky.

People of color in the United States, particularly young black men, are often assumed to be guilty and dangerous. In too many situations, black men are considered offenders incapable of being victims themselves. As a consequence of this country’s failure to address effectively its legacy of racial inequality, this presumption of guilt and the history that created it have significantly shaped every institution in American society, especially our criminal justice system.

At the Civil War’s end, black autonomy expanded but white supremacy remained deeply rooted. States began to look to the criminal justice system to construct policies and strategies to maintain the subordination of African-Americans. Convict leasing, the practice of “selling” the labor of state and local prisoners to private interests for state profit, used the criminal justice system to take away their political rights. State legislatures passed the Black Codes, which created new criminal offenses such as “vagrancy” and “loitering” and led to the mass arrest of black people. Then, relying on language in the Thirteenth Amendment that prohibits slavery and involuntary servitude “except as punishment for crime,” lawmakers authorized white-controlled governments to exploit the labor of African-Americans in private lease contracts or on state-owned farms.1 The legal scholar Jennifer Rae Taylor has observed:

While a black prisoner was a rarity during the slavery era (when slave masters were individually empowered to administer “discipline” to their human property), the solution to the free black population had become criminalization. In turn, the most common fate facing black convicts was to be sold into forced labor for the profit of the state.

Beginning as early as 1866 in states like Texas, Mississippi, and Georgia, convict leasing spread throughout the South and continued through the late nineteenth and early twentieth centuries. Leased black convicts faced deplorable, unsafe working conditions and brutal violence when they attempted to resist or escape bondage. An 1887 report by the Hinds County, Mississippi, grand jury recorded that six months after 204 convicts were leased to a man named McDonald, twenty were dead, nineteen had escaped, and twenty-three had been returned to the penitentiary disabled, ill, and near death. The penitentiary hospital was filled with sick and dying black men whose bodies bore “marks of the most inhuman and brutal treatment…so poor and emaciated that their bones almost come through the skin.”2

The explicit use of race to codify different kinds of offenses and punishments was challenged as unconstitutional, and criminal statutes were modified to avoid direct racial references, but the enforcement of the law didn’t change. Black people were routinely charged with a wide range of “offenses,” some of which whites were never charged with. African-Americans endured these challenges and humiliations and continued to rise up from slavery by seeking education and working hard under difficult conditions, but their refusal to act like slaves seemed only to provoke and agitate their white neighbors. This tension led to an era of lynching and violence that traumatized black people for decades.

Between the Civil War and World War II, thousands of African-Americans were lynched in the United States. Lynchings were brutal public murders that were tolerated by state and federal officials. These racially motivated acts, meant to bypass legal institutions in order to intimidate entire populations, became a form of terrorism. Lynching had a profound effect on race relations in the United States and defined the geographic, political, social, and economic conditions of African-Americans in ways that are still evident today.

Of the hundreds of black people lynched after being accused of rape and murder, very few were legally convicted of a crime, and many were demonstrably innocent. In 1918, for example, after a white woman was raped in Lewiston, North Carolina, a black suspect named Peter Bazemore was lynched by a mob before an investigation revealed that the real perpetrator had been a white man wearing blackface makeup.3 Hundreds more black people were lynched based on accusations of far less serious crimes, like arson, robbery, nonsexual assault, and vagrancy, many of which would not have been punishable by death even if the defendants had been convicted in a court of law. In addition, African-Americans were frequently lynched for not conforming to social customs or racial expectations, such as speaking to white people with less respect or formality than observers believed due.4

Many African-Americans were lynched not because they had been accused of committing a crime or social infraction, but simply because they were black and present when the preferred party could not be located. In 1901, Ballie Crutchfield’s brother allegedly found a lost wallet containing $120 and kept the money. He was arrested and about to be lynched by a mob in Smith County, Tennessee, when, at the last moment, he was able to break free and escape. Thwarted in their attempt to kill him, the mob turned their attention to his sister and lynched her instead, though she was not even alleged to have been involved in the theft.

New research continues to reveal the extent of lynching in America. The extraordinary documentation compiled by Professor Monroe Work (1866–1945) at Tuskegee University has been an invaluable historical resource for scholars, as has the joint work of sociologists Stewart Tolnay and E.M. Beck. These two sources are widely viewed as the most comprehensive collections of data on the subject in America. They have uncovered over three thousand instances of lynching between the end of Reconstruction in 1877 and 1950 in the twelve states that had the most lynchings: Alabama, Arkansas, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Texas, and Virginia.

Recently, the Equal Justice Initiative (EJI) in Montgomery, Alabama—of which I am the founder and executive director—spent five years and hundreds of hours reviewing this research and other documentation, including local newspapers, historical archives, court records, interviews, and reports in African-American newspapers. Our research documented more than four thousand racial terror lynchings between 1877 and 1950 in those twelve states, eight hundred more than had been previously reported. We distinguished “racial terror lynchings” from hangings or mob violence that followed some sort of criminal trial or were committed against nonminorities. However heinous, this second category of killings was a crude form of punishment. By contrast, racial terror lynchings were directed specifically at black people, with little bearing on an actual crime; the aim was to maintain white supremacy and political and economic racial subordination.

We also distinguished terror lynchings from other racial violence and hate crimes that were prosecuted as criminal acts, although prosecution for hate crimes committed against black people was rare before World War II. The lynchings we documented were acts of terrorism because they were murders carried out with impunity—sometimes in broad daylight, as Sherrilyn Ifill explains in her important book on the subject, On the Courthouse Lawn (2007)—whose perpetrators were never held accountable. These killings were not examples of “frontier justice,” because they generally took place in communities where there was a functioning criminal justice system that was deemed too good for African-Americans. Some “public spectacle lynchings” were even attended by the entire local white population and conducted as celebratory acts of racial control and domination.

Records show that racial terror lynchings from Reconstruction until World War II had six particularly common motivations: (1) a wildly distorted fear of interracial sex; (2) as a response to casual social transgressions; (3) after allegations of serious violent crime; (4) as public spectacle, which could be precipitated by any of the allegations named above; (5) as terroristic violence against the African-American population as a whole; and (6) as retribution for sharecroppers, ministers, and other community leaders who resisted mistreatment—the last becoming common between 1915 and 1945.

Our research confirmed that many victims of terror lynchings were murdered without being accused of any crime; they were killed for minor social transgressions or for asserting basic rights. Our conversations with survivors of lynchings also confirmed how directly lynching and racial terror motivated the forced migration of millions of black Americans out of the South. Thousands of people fled north for fear that a social misstep in an encounter with a white person might provoke a mob to show up and take their lives. Parents and spouses suffered what they characterized as “near-lynchings” and sent their loved ones away in frantic, desperate acts of protection.

The decline of lynching in America coincided with the increased use of capital punishment often following accelerated, unreliable legal processes in state courts. By the end of the 1930s, court-ordered executions outpaced lynchings in the former slave states for the first time. Two thirds of those executed that decade were black, and the trend continued: as African-Americans fell to just 22 percent of the southern population between 1910 and 1950, they constituted 75 percent of those executed.

Probably the most famous attempted “legal lynching” is the case of the “Scottsboro Boys,” nine young African-Americans charged with raping two white women in Alabama in 1931. During the trial, white mobs outside the courtroom demanded the teens’ executions. Represented by incompetent lawyers, the nine were convicted by all-white, all-male juries within two days, and all but the youngest were sentenced to death. When the NAACP and others launched a national movement to challenge the cursory proceedings, the legal scholar Stephen Bright has written, “the [white] people of Scottsboro did not understand the reaction. After all, they did not lynch the accused; they gave them a trial.”5 In reality, many defendants of the era learned that the prospect of being executed rather than lynched did little to introduce fairness into the outcome.

Though northern states had abolished public executions by 1850, some in the South maintained the practice until 1938. The spectacles were intended more to deter mob lynchings than to deter crime. Following Will Mack’s execution by public hanging in Brandon, Mississippi, in 1909, the Brandon News reasoned:

Public hangings are wrong, but under the circumstances, the quiet acquiescence of the people to submit to a legal trial, and their good behavior throughout, left no alternative to the board of supervisors but to grant the almost universal demand for a public execution.

Even in southern states that had outlawed public hangings much earlier, mobs often successfully demanded them.

In Sumterville, Florida, in 1902, a black man named Henry Wilson was convicted of murder in a trial that lasted just two hours and forty minutes. To mollify the mob of armed whites that filled the courtroom, the judge promised a death sentence that would be carried out by public hanging—despite state law prohibiting public executions. Even so, when the execution was set for a later date, the enraged mob threatened, “We’ll hang him before sundown, governor or no governor.” In response, Florida officials moved up the date, authorized Wilson to be hanged before the jeering mob, and congratulated themselves on having “avoided” a lynching.

In the 1940s and 1950s, the NAACP’s Legal Defense Fund (LDF) began what would become a multidecade litigation strategy to challenge the American death penalty—which was used most actively in the South—as racially biased and unconstitutional. It won in Furman v. Georgia in 1972, when the Supreme Court struck down Georgia’s death penalty statute, holding that capital punishment still too closely resembled “self-help, vigilante justice, and lynch law” and “if any basis can be discerned for the selection of these few to be sentenced to die, it is the constitutionally impermissible basis of race.”


Devin Allen: Protesters in Baltimore after the death of Freddie Gray, April 2015; photograph by Devin Allen from his new book, A Beautiful Ghetto. It includes a foreword by Keeanga-Yamahtta Taylor and an introduction by D. Watkins, and has just been published by Haymarket Books.

Southern opponents of the decision immediately decried it and set to writing new laws authorizing the death penalty. Following Furman, Mississippi Senator James O. Eastland accused the Court of “legislating” and “destroying our system of government,” while Georgia’s white supremacist lieutenant governor, Lester Maddox, called the decision “a license for anarchy, rape, and murder.” In December 1972, Florida became the first state after Furman to enact a new death penalty statute, and within two years, thirty-five states had followed suit. Proponents of Georgia’s new death penalty bill unapologetically borrowed the rhetoric of lynching, insisting, as Maddox put it:

There should be more hangings. Put more nooses on the gallows. We’ve got to make it safe on the street again…. It wouldn’t be too bad to hang some on the court house square, and let those who would plunder and destroy see.

State representative Guy Hill of Atlanta proposed a bill that would require death by hanging to take place “at or near the courthouse in the county in which the crime was committed.” Georgia state representative James H. “Sloppy” Floyd remarked, “If people commit these crimes, they ought to burn.” In 1976, in Gregg v. Georgia, the Supreme Court upheld Georgia’s new statute and thus reinstated the American death penalty, capitulating to the claim that legal executions were needed to prevent vigilante mob violence.

The new death penalty statutes continued to result in racial imbalance, and constitutional challenges persisted. In the 1987 case of McCleskey v. Kemp, the Supreme Court considered statistical evidence demonstrating that Georgia officials were more than four times as likely to impose a death sentence for the killing of a white person as for the killing of a black person. Accepting the data as accurate, the Court conceded that racial disparities in sentencing “are an inevitable part of our criminal justice system” and upheld Warren McCleskey’s death sentence because he had failed to identify “a constitutionally significant risk of racial bias” in his case.

Today, large racial disparities continue in capital sentencing. African-Americans make up less than 13 percent of the national population, but nearly 42 percent of those currently on death row and 34 percent of those executed since 1976. In 96 percent of states where researchers have examined the relationship between race and the death penalty, results reveal a pattern of discrimination based on the race of the victim, the race of the defendant, or both. Meanwhile, in capital trials today the accused is often the only person of color in the courtroom and illegal racial discrimination in jury selection continues to be widespread. In Houston County, Alabama, prosecutors have excluded 80 percent of qualified African-Americans from serving as jurors in death penalty cases.

More than eight in ten American lynchings between 1889 and 1918 occurred in the South, and more than eight in ten of the more than 1,400 legal executions carried out in this country since 1976 have been in the South, where the legacy of the nation’s embrace of slavery lingers. Today death sentences are disproportionately meted out to African-Americans accused of crimes against white victims; efforts to combat racial bias and create federal protection against it in death penalty cases remain thwarted by the familiar rhetoric of states’ rights. Regional data demonstrate that the modern American death penalty has its origins in racial terror and is, in the words of Bright, the legal scholar, “a direct descendant of lynching.”

In the face of this national ignominy, there is still an astonishing failure to acknowledge, discuss, or address the history of lynching. Many of the communities where lynchings took place have gone to great lengths to erect markers and memorials to the Civil War, to the Confederacy, and to events and incidents in which local power was violently reclaimed by white people. These communities celebrate and honor the architects of racial subordination and political leaders known for their defense of white supremacy. But in these same communities there are very few, if any, significant monuments or memorials that address the history and legacy of the struggle for racial equality and of lynching in particular. Many people who live in these places today have no awareness that race relations in their histories included terror and lynching. As Ifill has argued, the absence of memorials to lynching has deepened the injury to African-Americans and left the rest of the nation ignorant of this central part of our history.

The Civil Rights Act of 1964, arguably the signal legal achievement of the civil rights movement, contained provisions designed to eliminate discrimination in voting, education, and employment, but did not address racial bias in criminal justice. Though it was the most insidious engine of the subordination of black people throughout the era of racial terror and its aftermath, the criminal justice system remains the institution in American life least affected by the civil rights movement. Mass incarceration in America today stands as a continuation of past abuses, still limiting opportunities for our nation’s most vulnerable citizens.

We can’t change our past, but we can acknowledge it and better shape our future. The United States is not the only country with a violent history of oppression. Many nations have been burdened by legacies of racial domination, foreign occupation, or tribal conflict resulting in pervasive human rights abuses or genocide. The commitment to truth and reconciliation in South Africa was critical to that nation’s recovery. Rwanda has embraced transitional justice to heal and move forward. Today in Germany, besides a number of large memorials to the Holocaust, visitors encounter markers and stones at the homes of Jewish families who were taken to the concentration camps. But in America, we barely acknowledge the history and legacy of slavery, we have done nothing to recognize the era of lynching, and only in the last few years have a few monuments to the Confederacy been removed in the South.

The crucial question concerning capital punishment is not whether people deserve to die for the crimes they commit but rather whether we deserve to kill. Given the racial disparities that still exist in this country, we should eliminate the death penalty and expressly identify our history of lynching as a basis for its abolition. Confronting implicit bias in police departments should be seen as essential in twenty-first-century policing.

What threatened to kill me on the streets of Atlanta when I was a young attorney wasn’t just a misguided police officer with a gun, it was the force of America’s history of racial injustice and the presumption of guilt it created. In America, no child should be born with a presumption of guilt, burdened with expectations of failure and dangerousness because of the color of her or his skin or a parent’s poverty. Black people in this nation should be afforded the same protection, safety, and opportunity to thrive as anyone else. But that won’t happen until we look squarely at our history and commit to engaging the past that continues to haunt us.

1. “The Mississippi Black Codes were copied, sometimes word for word, by legislators in South Carolina, Georgia, Florida, Alabama, Louisiana and Texas,” writes the historian David M. Oshinsky in Worse Than Slavery: Parchman Farm and the Ordeal of Jim Crow Justice (Simon and Schuster, 1996), p. 21.

2. See “Prison Abuses in Mississippi: Under the Lease System Convicts Are Treated with Brutal Cruelty,” Chicago Daily Tribune, July 11, 1887.

3. See “Southern Farmers Lynch Peter Bazemore,” Chicago Defender, March 30, 1918, and “Short Shrift for Negro,” Cincinnati Enquirer, March 26, 1918.

4. Stewart E. Tolnay and E.M. Beck, A Festival of Violence: An Analysis of Southern Lynchings, 1882–1930 (University of Illinois Press, 1995), p. 47.

5. Stephen B. Bright, “Discrimination, Death and Denial: The Tolerance of Racial Discrimination in Infliction of the Death Penalty,” Santa Clara Law Review, Vol. 35, No. 2 (1995).


Fathers & Daughters

Louie

2017

Horace and Pete


KC Bailey/FX: Louis C.K. as Louie and Ursula Parker and Hadley Delany as his daughters Jane and Lilly in season 5 of Louie, 2015

Louie, the FX show that the comedian Louis C.K. wrote, directed, and starred in for five seasons, is credited with expanding the possibilities of the half-hour television comedy. Its first-person, expressionistic sensibility was something new for the sitcom when the show debuted in 2010. Another way to appreciate its cultural significance and its genius is to consider this: Louie may be the first sitcom featuring children that’s wholly inappropriate for children to watch. The show’s title character, based on C.K. himself, is a divorced stand-up comedian with shared custody of his two school-aged daughters, six and nine years old in the first season.

Louie is a rumpled, out-of-shape, unfashionably goateed white man who has not aged into comfortable success. On days when he has his kids, he picks them up from school, cooks their dinner, reminds them to do their homework, tucks them in at night, and brings them to school again the next morning. At forty-one, Louie is baffled by the shape his life is taking, especially by the fact that his divorce has conferred on him full parental authority every other week. The show is set to jazz, and the sweeping, wheeling camera and music are the chief instruments of comedy, along with C.K.’s reaction shots—wincing, dubious, resigned.

Louie takes fatherhood seriously. His own father, he tells a friend in one episode, was “not around,” and he wants to do it differently. But the show is always threatening to pull the rug out from under Louie’s great-dad conceit—not because he isn’t a good father, but because the value of his work is unknown and unknowable. The same social forces that have brought more men into the web of child care have also revealed that children do fine with all kinds of caretakers: grandparents, nannies, day care workers—pretty much any reliable, kind adult could perform any one of Louie’s tasks with no detriment to his daughters.

He cares for them in a state of contingency. Does it really matter that he cooks their meals from scratch? Do all these clocked hours make a difference in the end? And is he hiding behind the kids to avoid dealing with other parts of his life? “You’ve been a good father,” his ex-wife acknowledges, urging him to audition for a late-night show hosting spot he’s been shortlisted for. “But no one needs a father very much.” It’s a great bit of deadpan three seasons into a show that has made so much of Louie’s fatherhood. “Yes, you would be spending less time with the girls,” she goes on, exasperated, “but it’s because you’d have a job, Louie.”

No one can say for sure how much the girls need him, but there’s no question that he needs them. When he doesn’t have the kids, his days are a wasteland: poker with a raunchy group of comedian friends, ice cream benders, masturbation to the local newscasters on TV. He dates a variety of emotionally and psychologically damaged women as well as some well-adjusted ones with whom it never works out. At night, he does gigs at the Comedy Cellar and Caroline’s, and the stand-up bits are interspersed through each episode.

C.K.’s stand-up is genial yet dirty. He has pondered child molesters (“From their point of view, it must be amazing, for them to risk so much”) and bestiality (“If no one ever said, ‘you should not have sex with animals,’ I would totally have sex with animals, all the time”), as well as more run-of-the-mill aspects of the post-divorce dating scene (“I like Jewish girls, they give tough hand jobs”). He finds no end of occasions to mime sex acts, especially masturbation, onstage.

When he started releasing hour-long comedy specials ten years ago, C.K.’s material was long on kids, marriage, men and women, and getting older and fatter. These subjects are still a big part of his acts, especially in Louie, but he’s gotten even more traction with observations about our national mood disorder: the irritable, selfish public behavior and private melancholy of Americans in the smartphone age (or sometimes, more specifically, affluent white Americans). He’s most effective when he uses himself as representative American jerk and melancholic. In a Saturday Night Live appearance in April, he described a recent trip out of town during which he felt he wasn’t getting his fair share of white privilege because the hotel staff didn’t treat his lost laundry as a top-level emergency.

C.K. beams when he laughs at his own jokes and his amusement seems genuine and deep, taking the edge off his provocations as well as his depressive observations about his own life. In his latest stand-up special, 2017, released this spring, he has a riff on suicide always being an option. “But don’t get me wrong, I like life. I haven’t killed myself. That’s exactly how much I like life. With a razor-thin margin.” In Louie, his will to live is almost exclusively bound up with his daughters: “I was thinking that on Jane’s eighteenth birthday,” he tells a fellow parent, “that’s the day I stop being a dad, right?… The day I just become a guy, not daddy. I just become some dude. I think on that day”—he pauses—“I might kill myself.” He looks as surprised as his interlocutor at where his train of thought has taken him.

Sexual perversity is around every corner in Louie, whether it’s an old woman who opens her apartment door stark naked, flashes Louie, then hisses “Pig!,” or a jittery bookstore clerk (played by Chloë Sevigny) who insists on helping him track down an old flame and then gets so turned on by the project that she masturbates to orgasm in the middle of their conversation in a coffee shop.

You could play the masturbating woman strictly for laughs, or it could be something darker, unnerving. C.K. tips it toward comedy (there’s a brilliant exchange of glances between Louis and the only other person in the coffee shop, an austere male barista), but not too far; the scene has a complexity of tone typical of the show as a whole. Until she actually puts her hand between her legs, we don’t know what Sevigny’s character, who has the air of an increasingly agitated, eccentric loner, is going to do. When it happens, the gesture of pulling aside the waistband of her skirt is as startling as an act of violence.

Over the show’s run, Louis has been the victim of two incidents of sexual assault: one by a dentist who seems to have put his penis in Louis’s mouth while he was sedated in the dental chair, and one by a woman who’s so angry that Louis won’t go down on her after she gave him a blow job that she smashes his head against a car window until he capitulates. Is it funny? Yes, but it’s also something other than funny. Sitcoms of the last twenty years like Curb Your Enthusiasm, Arrested Development, or 30 Rock have been innovative and dazzlingly funny, but they’re also uniformly light, issuing a steady, rhythmic pulse of levity at predictably short intervals, making for great bedtime viewing.

Louie is something different, a comedy about bodily shame and sexual despair and the narrowing possibilities of middle age whose turns are unpredictable, enigmatic, and carry emotional risks. The jokes push beyond the familiar conceit that Louis is a sad sack who can’t get a date. In fact he often does have a date, and sex, but that only opens him up to a world of unsettling discoveries about himself and his partners. Louie’s New York is a sexually permissive playground in which hardly anyone can get what he or she wants. More often than not, people’s sexual appetites alienate them from one another, or even cause harm.

Meanwhile, the children are in jangling proximity to all this perversion. The scenes involving child actors are of course clean, but they’re only a frame away from Louie’s off-hours depravity, raising anxious questions about modern fatherhood. Can a divorced father on the prowl also make himself safely and intimately available to his children? Can Louie rein in his depressive, pessimistic, and self-destructive impulses and give himself over to his daughters’ needs for hours and days at a time while still retaining enough of himself to write comedy? The answer, and the source of the show’s rogue joy, is yes—incredibly, yes.

There probably won’t be another season of Louie, C.K. has said. Instead, he recently released on his website the self-funded show Horace and Pete, also written by, directed by, and starring C.K. Though the show has a distinguished celebrity cast (including Edie Falco, Alan Alda, Jessica Lange, and Steve Buscemi), C.K. made it quietly in a matter of weeks and released it without advance publicity. Horace and Pete is not a comedy. It is, in fact, a tragedy, and there’s no missing that this show, too, is about fatherhood, or, more precisely, patrimony. Three siblings in their late forties and early fifties have inherited a family business, a one-hundred-year-old bar in a formerly white, working-class neighborhood in Brooklyn that’s now gentrifying. The bar doesn’t make much money, but it’s now extremely valuable real estate. Should they sell it?

The look and feel of the show is a sharp contrast to Louie, so much so that it seems like an exercise in voluntary artistic deprivation for C.K.: multiple cameras, just three interior sets, almost no music. And in place of the associative logic of Louie, with its segments in ambiguous relationship to one another, we have the unspooling of a linear plot. The technical elements of Louie that made the show move and breathe are absent here. Instead, the focus is on writing and acting and bodies against an unchanging backdrop; characters reveal their stories in nothing more cinematically sophisticated than monologues.

It is, as critics have noted, much like a filmed play, and its themes of family dysfunction and vexed paternal lineage evoke not only twentieth-century American playwrights like Arthur Miller and Eugene O’Neill, but Henrik Ibsen before them. And while the show is set in the present—bar patrons yak about the Trump campaign—it also relies on some conspicuously archaic turns of plot. There are revelations of secret paternity. Not one but two characters have given an unwanted infant to a sibling to raise as his or her own.

The bar, Horace and Pete’s, has been passed down through many generations of the Wittel family, always to sons named Horace and Pete after the original proprietors. But now the lineage is threatening to break down. Depending on how you look at it, the reason for the breakdown is either bad fathering or the rise of Wittel women, or both. Sylvia (Falco), the oldest of the siblings, is trying to convince her brothers Horace and Pete (C.K. and Buscemi) to sell it. The bar had previously been owned only by male relatives, but because their father died without a will, Sylvia is now a common-law co-owner with her brothers. “This place is worth millions,” she tells Horace—they could divide the money and get on with their lives.


Louis C.K.: Alan Alda, Jessica Lange, Steve Buscemi, Louis C.K., and Edie Falco in Horace and Pete, 2016

Getting out, moving on, and starting over are big with Sylvia; she has the least emotional attachment to the family business of all the siblings—in fact, she loathes it. When the siblings were young, their mother left their violent father and raised the kids by herself uptown. Sylvia cherishes her mother’s bravery and indulges no sentimentality about the generations of Horaces and Petes. “My father was a wife beater and a fucking brute and a narcissist. And thank god our mother got us out of here. How many wives have been beaten in this place?” she says to Uncle Pete (Alda). And to Horace: “Ma got out. She got us out. It kills me that you’re back here.”

But Horace, played by C.K., doesn’t want to sell. When their father died a year ago he left his job as an accountant to take over the bar. He is helped by his brother Pete, a heavily medicated psychotic who spent much of his life hospitalized for mental illness but now lives in a room behind the bar and helps out with housekeeping, and by belligerent, casually racist Uncle Pete, who works behind the bar and yells at the bar’s new hipster customers to get off their phones.

If the family sold the bar, Pete and Uncle Pete would probably never find work again. But that’s not the only reason Horace doesn’t want to sell. Horace has been living for the last year in his parents’ old apartment above the bar, still decorated in a dingy 1970s palette of brown, rusty orange, and avocado and olive greens, betraying nothing of himself. There is no self-expression in running the business: Horace has stepped into a role occupied by seven other Horaces before him, and that, one senses, is the appeal of it. His patrimony gives form and structure and, potentially, meaning to his life: “This is Horace and Pete’s, and I’m Horace,” he says to Sylvia.

It’s hard not to see a parallel in there somewhere to C.K. himself, who seems to be taking a break from being a vaunted comedy innovator by seeking refuge in patently older modes. The sets, the multicamera format, the dramatic staging, the long monologues, and the antiquated plot devices all refer to earlier points in the history of drama and television; this is not going to be something new, everything about the show seems to scream. Of course, it’s such an anomaly among today’s shows that it feels like something new.

Horace, in any case, likes the bar precisely for its lack of novelty or growth potential. “Does every business have to make a profit?” he asks practical Sylvia. “Can’t any place just be a place? People come here. They’ve come here for a hundred years.” To which Sylvia’s typical reply is, “A hundred years of misery is enough.” Tough, foul-mouthed, unsparing, mirthless, Falco’s leonine Sylvia has a moral authority and radiance in spite of her occasional cruelty to various family members. A hundred years of misery seems no exaggeration—misery soaks these characters, a joyless lot.

Yet one has the feeling that the show wants to defend the family business, and maybe even hold out the possibility that some things about the older, whiter, prefeminist order it represents are worth defending. No character can quite find the words to do so, however. Uncle Pete and Horace merely keep pointing to the bar’s sheer endurance. A community fixture, an informal neighborhood institution, a working connection to the past—it’s no small thing; do we have to give it up for lost just because its owners were wife beaters? On the other hand, this particular institution is a bar that caters to the neighborhood’s hard drinkers, where at least one patron has died of alcohol poisoning and at least one other has committed double homicide. C.K. stacks the deck against Horace and Pete’s, and subtly evades a reckoning.

Sylvia finally prevails (though not in the way she intended) and has the last word, an ascendance that seems inevitable. But the show leaves us with an interesting twist on what has seemed, broadly speaking, to be a conflict between the Wittel men and the Wittel women.

The last episode contains a flashback to the siblings’ childhood, the decisive day that the mother and children sneak out of the house and leave their father for good. We see the elder Horace (also played by C.K.) hit his boys, pull his wife (played by Falco) by the hair, and generally frighten and intimidate everyone in the family—except for the teenaged Sylvia, who comes home defiant after curfew and ends up having the last word even with her enraged father. Until now, the show has discussed paternal legacy—financial and otherwise—as something passed down from father to son, but the scenes of young Sylvia with her two parents point to a loophole in the patriarchal order. It’s not from her mild-mannered mother that Sylvia inherited her toughness and ferocity—it’s from Dad.


The Language of Diane Arbus

In response to:

The Art of Difference from the June 8, 2017 issue

To the Editors:

In an otherwise characteristically sensitive piece on Diane Arbus [“The Art of Difference,” NYR, June 8], Hilton Als repeats without qualification and as a truism that Diane Arbus “used the word ‘freaks’ to describe [her] subjects….” Though often repeated, and in this case perhaps unintentionally broad in its implication, the claim could not be further from the truth, and its promulgation harms the reputations of both the photographer and the writer.

Als makes it clear that he objects to the use of the word “freaks,” which he finds disparaging, but he seems to have missed the precision of Arbus’s language.

Although Arbus did say that she “adored freaks,” and that they made her feel “a mixture of shame and awe,” she was using the term in a highly specific and tightly limited sense. As she herself was at pains to point out, she used the term “freak” solely to describe persons with bizarre physical abnormalities who made a living by commercially exploiting those abnormalities, namely people who worked in freak shows. This group did include the giant Eddie Carmel, the dwarf Lauro Morales, and the midget friends at home, but it did not include all giants, all dwarfs, or all midgets, let alone the thousands of other people she photographed.

What a blessing for her that she is dead. Imagine knowing that one’s life’s work, which had been devoted to exploring the myriad permutations of what it is to be human, was frequently summed up with a simple, slang, divisive insult for which one was then erroneously given credit.

Neil Selkirk
New York City

Hilton Als replies:

If you say so.
