Thursday, June 30, 2016

Looking Back and Ahead

One of the books I’m planning to read this summer is Chuck Klosterman’s book, But What If We’re Wrong? Thinking About the Present As If It Were the Past, published earlier this month by Blue Rider Press, a Penguin imprint. (Klosterman, whose work appears regularly in Esquire, GQ, and The New York Times Magazine, is best known for his acerbic essays evaluating pop culture and modern mores. His books Eating the Dinosaur and Sex, Drugs, and Cocoa Puffs will be familiar to at least some, I’m sure.) I bought the book because its slightly outrageous theme—how our world will appear to people attempting to evaluate and explain it five hundred years into the future—called out to me. What he has to say, I’ll find out soon enough. But I thought that I’d take my leave of you all for the summer by answering the question, or attempting to answer it, first on my own.

To begin to wonder how we will seem to people in the distant future—in the first weeks of summer in 2516, say—we would probably do best to think for a long moment about how people five hundred years in our past seem to us. 1516 was a long time ago. And I find myself able, therefore, to make its events sound distant and wholly unfamiliar. But I can also make the year 1516 sound fully familiar and recognizable…and I can do both of those things on the transnational level and on the level of the individual.

On the global level, I could, for example, describe a world that has nothing at all to do with our own, a world of treaties no one’s ever heard of (the 1516 Treaty of Noyon, for example, in which France—to our thinking oddly—granted hegemony over Naples to Spain in exchange for Spain recognizing France’s claim to Milan, or the Treaty of Brussels, signed that same year, which established peace between France and the Holy Roman Empire) and battles like the Battle of Yaunis Khan that are obscure today even to relatively astute students of world history. But I can also describe the world of 1516 in a way that will be strangely familiar: that year half a millennium ago featured a Middle East in turmoil (the Battle of Yaunis Khan, fought near Gaza City, was basically a contest between the Turks and the Egyptians to see who would be the dominant force in the region), Europe in endless agony over the degree to which its nations wished to be each other’s allies (the Treaty of Brussels, also mentioned above, was at least in part about the degree to which the nations of Western Europe could function as partners and peaceful neighbors), countries vying with each other to do business in ever-increasing volume with China (Rafael Perestrello, a cousin of Mrs. Christopher Columbus, became in 1516 the first European to arrive by sea in mainland China for the purpose of developing trade relations), and Europeans being somehow able to integrate republican ideals with vicious anti-Semitism (the Jews of Venice were forced from their homes into a ghetto, Europe’s first, that same year). Putting it that way, the world of 1516 doesn’t sound so foreign, does it?

And that dual way of looking at the past that works on the macro level works just as well on the personal level. The world in 1516 was wholly alien to the one in which we live…but only in a certain sense. There were, obviously, no cars, no trains, no electric lights, no internet, no television (not even cable), no recorded anything, no computers, no nukes, no e-books, no telephones, and no cellophane. I could make a much longer list of things we take for granted that were unknown in the sixteenth century, but that’s only one way to think about life five hundred years ago…and in a different sense life was not all that different from life today. Young people grew up and fell in love. Parents struggled over the right way to raise their children. Children felt burdened but also challenged by the demands put upon them by their parents. Students studied to pass their examinations. Soldiers served in their nation’s armies. Composers wrote music. Painters painted. Preachers preached. Teachers taught. Dancers danced. The old found the young brash and foolish. The young found their elders annoying and stodgy. Employers found their employees lazy and demanding. Employees found their bosses imperious and greedy. People feared illness and death, but people became sick and died anyway. Babies were born. Eggs were hatched. The sea was filled with fish. That doesn’t sound so unfamiliar, does it?

And now we move to the future. It is 2516 or, as I expect the Jews will privately call it, 6276. Everything will be different and nothing will be. I can’t even begin to imagine what technology will be like. Even the most basic questions about life in the twenty-sixth century resist answering. Will human beings still live only on earth? Will computers still be external machines that people use to do things, or will their abilities by then be so fully internalized into the body that people and computers will no longer exist as independent entities? Will the countries of today’s world still exist? For that matter, will any countries exist…or will globalization at a certain point make it bizarre to think of the world’s peoples divided up into nation-states instead of as fellow-citizens of a global republic? Will we have visited nearby stars? Will their citizens have visited us? How much of today’s landmass will be submerged beneath a vast global ocean once the ice caps melt entirely? Will there still be Coke? To none of those questions do I have any ready answers.

And yet, on the other level, the level of the individual, I imagine that things will be unchanged. The heart will still follow its own rules. People will occasionally wake up next to the wrong person and have to bear the consequences of their own folly. Friends will fall out and reconcile…or not. Children will live up to their parents’ expectations in some ways and disappoint them in others. People will yearn for wealth, only to discover later on how little money can really buy. People will grow older as the years pass, but only some will succeed at doing so gracefully. Parents will describe their children’s favorite music as noise; children will know their parents well and not at all. Siblings will occasionally resent each other. Love will be elusive…as will also be happiness. No one will really think he or she is earning enough or being compensated adequately. The sky will still be blue.

If I can narrow my gaze to the world I know best, things will also change and be the same. Nations will rise and fall, but Israel, the am olam, will endure…always on the brink of disaster but never quite vanishing from the pageant of history. When people imagine that only the ḥareidi world will survive—and that the rest of us, everybody not self-isolated in hermetically-sealed communities and not self-deprived of the option of going out into the real world and earning a living there, will eventually assimilate into the general population and be gone from the world—they’re missing the point of being an am olam, an eternal people, in the first place. There will always be people, I fear, for whom intellectual and spiritual integrity are inconsistent with “real” religious life, but in my opinion it is those people who are far less likely to survive the onslaught of time. To hide, after all, only works for as long as you can remain hidden; when that option no longer exists, the only remaining choices will be to live in a world you have no training to encounter or else to flee to even more remote hideaways located in even less accessible places. Eventually, all who play that game will lose…and those whose faith requires engagement with the world and who can therefore adapt to an ever-changing environment without resenting the challenges life places upon the living—those, in my opinion, will be the ones who endure.

When I compare the Jewish world of 1516 with the one I imagine for 2516, the details change but the pillars upon which the world stands—Torah study, public and private worship, and a thick sense of inner-communal responsibility—remain the same. The seas will rise, but Jerusalem will endure. The sonim will cackle and jeer, but the power of a single individual reciting the Shema with a full heart and a willing spirit unsullied by ulterior motive will prove mightier than even the most oppressive regime. It’s hard for me to imagine the world dishing out worse punishment than the Jewish people has already endured, yet the divine spirit that guides and protects the House of Israel—as regrettably opposed to individual Israelites—exists outside the context of action and reaction, of violence and stoic endurance. The Jews of 2516 may wonder how we ever survived our own history…but much would be fully familiar if we could only peer through the looking glass that far into the future. There will still be children falling asleep at Pesach seders. Rabbis will still be wondering what to say about Parashat Tzav. The price of truly fine t’fillin will still seem exorbitant (including to those who shell out the dough and buy them anyway). And no one, even half a millennium from now, will truly understand what the etrog is meant to symbolize. All this will endure! And, in the end, the part that never changes will prove more profound than the part that does. In that regard, the history of the am olam is the same as the story of any individual: the part that changes as the years pass, for all it feels distressing to contemplate, is the less crucial part of the mix…and the part that is inviolate and unchanging, the spark of divinity that animates the soul and which exists without reference to time past or time future, that is the part that matters, that truly counts.

A week from today, I hope to be sitting at my other work desk, the one located on Gad Tedeschi Street in Jerusalem, and working on some of my summer writing projects. I am never more at peace, never more relaxed or more focused, than when I sit at that desk and look out at the walls of the Old City hiding in the distance behind the tall trees that line Rechov Ha-askan, the street that leads from our neighborhood north towards the Haas Promenade, one of Jerusalem’s loveliest look-out points. There is real peace for me in that place…and I wish I could share it with all of you. Barring any unexpected adventures like those we encountered in 2014, I’ll write to you all again upon our return. In the meantime, I bless you all from afar and in advance with the peace of Jerusalem, and pray that God keep us all safe until we are together again in August.

Thursday, June 23, 2016

Life and Death

Last week, I wrote about my disinclination to lump the people killed in Orlando together as “victims” of an armed madman’s rage or even as “people” whose horrific deaths were somehow more terrible than if they had been murdered in some less dramatic way as individuals rather than en masse as a group. A number of readers took issue, asking me if I really thought that there was something inherently demeaning to each deceased individual in the suggestion that his or her death was more awful than it would otherwise have been because of the numbers involved. Given that I’ve said the same thing many times about victims of terror in several other contexts—that, human life being of inestimable value, there is something slightly off about taking context into account when evaluating the deaths of innocents—I was slightly surprised by those responses. And yet, even now, after having rethought the issue over these last days, I stand by my original remark and continue to believe that, had Omar Mateen killed one single person instead of forty-nine, that person’s death would be no less awful to contemplate—and the loss to the world no less acute—because forty-eight other people weren’t killed by the same crazy person on the same evening. As I wrote last week, each person murdered in Orlando was a world unto him or herself, a universe of history, ability, intelligence, and potential. As the great Hans Fallada wrote, “every man dies alone and on his own terms,” i.e., as an individual, as a person, as a complete world.

That there is something perverse and demeaning in the effort to evaluate the worth of a human life is key. Indeed, for most Americans the whole notion of assigning a specific value to a specific person’s life sounds like something connected to the slave trade in olden times, a wretched, degrading effort that rested on the traders’ ability to maximize profit by “correctly” determining the dollar-value of any particular man, woman, or child put up for sale. (Just for the record, when the Bible discusses the case of the naïf who pledges the value of a specific person to the Temple, a pledge both ridiculous and so inherently sacred that it cannot simply be set aside or ignored, it fixes the specific amounts to be paid based on age and gender specifically to avoid the possibility of people actually attempting to say what a specific individual is worth monetarily. I’ll write about that very interesting chapter some other time.) When expressed as an abstract philosophical principle, the idea that life is of incalculable value sounds relatively inarguable. But the corollary notion that the lives (and, by extension, the deaths) of specific individuals can also not be rationally or ethically hierarchized feels far less indisputable…at least most of the time and in most contexts.

We establish such hierarchies all the time, after all. There are more people in need of all varieties of organ transplants than there are donors, and so the regulatory agencies that supervise such procedures must establish a way to determine whose life among those in need, say, of a new kidney or heart is most worth saving. The version of the famous “trolley” conundrum that imagines the conductor of a runaway train having to decide whether to steer a train he cannot stop towards a terminally-ill elderly man or a healthy child is another good example: for all we claim that no one can assign actual worth to any human life, it would be the rare person indeed who would say that it doesn’t matter what the conductor decides because the death of a centenarian suffering from an incurable disease with just weeks to live is no more or less tragic than the death of a seven-year-old in excellent health who could conceivably live on for eighty or ninety more years and whose potential contributions to the world cannot yet even be imagined. And even if some among us would argue that there is something inherently wrong in saying that the life of a child is more valuable than the life of an elderly person, then surely even they would admit that the situation feels different when the numbers are ratcheted up sufficiently and the conductor’s choice is imagined differently, presenting him with the choice between aiming his train at a single old man and aiming it at a group of fifty or eighty children. Or between steering his runaway train towards an elderly man and his ailing wife, and aiming it at a nuclear power plant that, if seriously enough damaged, could conceivably spew radioactive material into the atmosphere and end up harming or even killing countless thousands. In such cases, it feels possible to determine the relative value of two sets of human lives without transgressing any ethical boundaries. But how to do it and what criteria to bring to bear in making such a decision—that is a different matter entirely.

Horrible things have happened in our world just recently. An eleven-year-old girl from Bexley, Ohio, was killed earlier this week when a tree struck by lightning fell on her cabin in a summer camp in Bennington, Indiana. A little boy from Omaha died after being snatched by an alligator from some shallow water at the edge of a lagoon at the Walt Disney World Resort in Lake Buena Vista, Florida. Fourteen boys and girls aged nine to eleven, none of whom was wearing a life jacket, died when the raft they were paddling around on in Lake Syamozero in the part of northwestern Russia called Karelia capsized during a sudden storm. And all this has happened since the horrific murder of forty-nine young people out for a night of carefree dancing and flirting in Orlando.

To say that these horrific incidents are each other’s precise equals because, the value of life being unquantifiable, the loss of any life is a tragedy no different from the loss of any other life sounds reasonable enough at first blush. But is that really how we feel? The girl from Ohio was the victim of a terrible accident that, truly, none could have foreseen. The little boy in Florida? Not so clear: given that five alligators were taken from the lagoon after the boy’s death and that the boy was obeying the sole sign in the area that merely said “No Swimming,” a sign that most parents (myself included) would easily take to mean that swimming is forbidden but not splashing around at water’s edge, it feels as though there must be some real responsibility of both the moral and legal varieties to be assigned. Does that make the boy’s death more tragic than the girl’s? When put like that, the question feels bizarre even to pose aloud. But agreeing that a disaster none could have foreseen is different in kind from—and thus reasonably deemed more awful than—one that could easily have been averted doesn’t feel as though it constitutes a rejection of the principle that no individual’s life can be judged more valuable than anyone else’s.

The situation regarding the children in Karelia makes this point even more forcefully: this disaster happened at a holiday resort in which the children were theoretically being watched over by responsible adults, yet they were permitted out on a huge lake without life jackets despite cyclone warnings being in effect for the region. (Four adults, including the director and deputy director of the hotel and two water-safety instructors, have been taken into custody and will probably face criminal charges in the matter.) So although it feels wrong to attempt, however ridiculously, to determine which loss of life constituted the greater tragedy, it also feels natural to categorize these horrors in terms of the degree to which they could possibly have been avoided. In other words, attempting to evaluate the intrinsic value of any human life seems wrong. But establishing a hierarchy based on the degree to which a given individual’s death could have been averted feels not only right, but reasonable and just. But is that what we want to assert: that the loss of life can best be evaluated in terms of whom there is to blame? The children in Russia died because the people theoretically watching over them were inept and irresponsible. But what then should we say about the dead in Orlando, none of whom died because of ineptitude or incompetence on the part of others but who were gunned down in cold blood by a murderer who knew exactly what he was doing? Does the fact that they were brutally executed make their loss different in kind from the death of that poor child in Indiana who died when a tree fell on her bunk?

In my opinion, the most reasonable approach to these issues would be to agree that, without contravening the basic principle that human life is possessed of incalculable value, we can assign a degree of tragedy to deaths based on the degree to which they could possibly have been averted. But another idea suggests itself to me as well: that, without attempting to evaluate the worth of lives, we can entirely reasonably assign value to deaths based on the posthumous good they inspire.

And that brings me to my final example of a young person who died last week: Mahmoud Rafat Badran, a fifteen-year-old Palestinian who was in the wrong place at the wrong time as members of the IDF opened fire on Palestinians who were attempting to murder Israeli civilians driving west on Route 443 outside Jerusalem by throwing rocks and firebombs at randomly chosen cars. The dead boy does not appear to have been among those attempting to take innocent Israeli lives; he and some friends had gone swimming after breaking their Ramadan fast and were returning home in a taxi when their vehicle was accidentally hit by gunfire. To describe his death as a tragedy does not require any specific moral courage; how can the accidental death of a teenager not be deemed tragic? In that sense, he joins the others as an innocent victim of circumstance. But by mentioning such a politically charged death in the same context as the others, I sharpen my earlier point: his life was worth no more or less than anyone else’s, but the worth, so to speak, of his death will be determined posthumously by the effect, if any, it has on the world.

There is already no dearth of individuals lining up to assign blame. For some, his death was, to quote one Palestinian official, a “cold-blooded assassination.” That seems to me an example of almost grotesque grandstanding by someone prepared to stand on the back of a dead child to score some political points, but to wave his death away as mere collateral damage only works if you are prepared to share that thought with his grieving parents, if you are prepared to say that the unwarranted death of an unarmed teenager riding home in a taxi is somehow less tragic than the death of an innocent murdered by terrorists while eating ice cream in the Sarona Market in Tel Aviv.

The stories are not each other’s equivalents. The Palestinian boy was killed accidentally by people behaving nobly in the defense of innocents, whereas the people at the Sarona Market were killed by people behaving criminally and, to say the least, ignobly. But, in the end, none of them deserved to die…and that’s my exact point: no innocent life is more or less valuable than any other, but deaths can be evaluated in terms of what they bring in their wake.

In most contexts, this pill feels easier to swallow because it doesn’t feel like there could be another side to the story: who could argue against sturdier bunks in summer camps, clearer signage at resorts where alligators might possibly be lurking near children playing in shallow water, or more stringent safety training for people charged with watching over children in boats or on rafts? In the context of the Middle East, however, nothing is ever that simple. Still, what single incident could be more likely to move the parties involved to seek and find a way peacefully to co-exist in the same corner of the world than the death of a child? You could say the same about all the other victims of terror or of the response to terror, of course, including the victims of last week’s horrific shooting in Tel Aviv. But what if this specific incident were somehow to bring the world to its senses? It feels unlikely. It could hardly be more unlikely. But unlikely isn’t impossible…and with that hopeful thought in place, I conclude my ninth year of writing weekly with this, my 355th more or less consecutive letter to you all.

Thursday, June 16, 2016

Orlando Furioso

Part of me wants it to be a gay thing. A closeted gay man, out only to himself and the people he meets surreptitiously on dating sites that allow him to mask his identity (and so not actually out to them either in any meaningful way), finally finds the burden of being himself only to himself too great to bear, and he snaps. Seeking out a club full of happy gay people enjoying an evening of dancing and partying, none of whom appears to be viewing his or her sexual identity as an unbearable burden, he decides to take the ultimate revenge on them for daring to live their lives unencumbered by subterfuge and unsaddled by shame. Living in a state in which assault rifles are sold in strip malls to anyone over eighteen years of age who has never been charged with or convicted of a felony, convicted twice of drunk driving, involuntarily committed to a mental health facility, or the subject of a restraining order, the man buys a gun and lots of ammunition, then sets out to make the point as viscerally as possible that he is nothing like those weirdos who go dancing in gay nightclubs. There’s something virtuous about this narrative too, because it focuses on the victims and remembers them as innocent targets of violent prejudice rather than merely as innocent bystanders who had the misfortune to be in the wrong place at the wrong time. In that sense, explaining this as a kind of anti-gay pogrom carried out by a sole Cossack honors the memory of the dead by refusing to obscure the reason they died in the way that referring to Orlando as an example of “senseless murder” would: it wasn’t senseless, because it was fully intentional. And it is a satisfying narrative for the nation as well because it makes the whole thing about the shooter, thus something we can move past simply by turning the page and reading instead about some other catastrophe somewhere.

Another part of me, however, wants it to be about guns. That too is a satisfying narrative, or a semi-satisfying one. A man who is clearly crazy—because how could someone who carries out the cold-blooded murder of strangers enjoying an evening out in a dance club possibly not be a deranged person?—a man who is obviously deranged takes his place in the unholy line-up of other crazy people who have committed mass murder with guns. Some of the citizens on that line-up have become famous or semi-famous, like Jared Lee Loughner, Adam Lanza, Eric Harris, Dylan Klebold, Nidal Hasan, or Dylann Roof. Others, like James Holmes, Wade Page, Syed Farook, and Tashfeen Malik, all of whom committed terrible crimes, somehow failed to gain the attendant celebrity generally accorded to mass murderers in our country. But they all are united in my mind by the fact that they had, all of them, easy access to the guns they used to kill and didn’t have the moral strength to resist the demons that urged them forward to perpetrate their horrific crimes. (For the record, James Eagan Holmes murdered twelve people in a movie theater in Aurora, Colorado, in 2012. Wade Michael Page murdered six people in the Sikh temple in Oak Creek, Wisconsin, earlier that same year. Farook and Malik were the perpetrators of the massacre in San Bernardino last December.)

It’s satisfying, this narrative, partially because it explains something that would otherwise feel inexplicable, partially because it explains—or rather explains away—the crime as the insane act of a deranged individual and leaves us free to draw the conclusion we all so desperately want to draw: that no further action is required, that crazy people always do crazy things, that this was a disaster…but not one that could have been prevented. And that seems key as I survey the opinions flying around the blogosphere: that we, the people, find a way to believe that we didn’t do anything wrong, that he, the insane perpetrator, is—or rather, was—the criminal. If he were still alive, he could be tried in a court of law. But he isn’t and so he can’t…and with that grim thought in place, those who embrace this version of the narrative too are free to turn the page and read about something less distressing.

This is the narrative put forward by Senator Chris Murphy of Connecticut and his fellow filibusterers, whose basic point was that, if gun purchases were more restricted in certain specific ways, then fewer bad people would have guns. Leaving aside Second Amendment issues, there are still deep problems with this narrative. Criminals are by definition lawbreakers, so tightening up existing laws will only affect people who are law-abiding. Furthermore, guns like hunting rifles, which no one would dream of banning entirely, can also be used to commit horrible crimes. Most important of all, the sheer number of guns out there at the moment—about 300 million according to the Congressional Research Service—makes it unlikely that any effort to change the rules for acquiring guns would matter much for decades, if not scores of years, to come. But the “guns narrative” is satisfying nevertheless because blame renders cogent the inexplicable. And that, more than anything, seems to be exactly what we want: for this whole incident to be explicable.

And then there’s the Islamicist narrative, the one that casts Orlando not as Charleston, but as San Bernardino. An American-born Muslim of Afghan descent is somehow radicalized and embraces the barbaric militarism of Islamic extremism. Pausing in the middle of the massacre to reiterate to some random 911 operator that he was acting as an ISIS operative, the murderer in this narrative too grants us the right to qualify his act as explicable…because he himself has explicated it. Seeking to murder innocents as a way of expressing his hostility to Western culture in general and the country of his birth—our country—in particular, the shooter was, according to this narrative, neither crazy nor confused. Indeed, according to this narrative, he was entirely aware of what he was doing, which was doing his best to bring to these shores the kind of terror that the residents of ISIS-occupied Syria and Iraq know all too well and which ISIS has already brought to Paris and Brussels. The question of whether Omar Mateen was a “real” ISIS operative is irrelevant: either he was or he wasn’t, but the bottom line is that it hardly matters if he was a self-appointed operative or one taking specific instructions from his handlers across the sea because, regardless, he killed not for personal gain or out of any animus against any of his victims, but as an act of Islamic martyrdom. There is something satisfying about this narrative as well. It grants international stature to what would otherwise be a solely American tragedy. It explains the deed, however perversely, as a kind of political statement rather than one prompted by “mere” insanity. But although it is true that no one seems to know how to prevent these acts, it is also true that even as draconian a measure as Donald Trump’s proposed temporary ban on Muslim immigration or even entrance to our country would not address the danger posed by home-grown jihadists like the Omar Mateens of this world.

The key to all of the above narratives is that they all attempt to explain a deed that would otherwise be deemed inexplicable…but none is entirely convincing. The world is full of closeted gay men who do not turn to mass murder to express their frustration with their lot in life. Millions of people buy guns and behave fully legally and responsibly with them. The large majority of Muslims who live and thrive here do not become radicalized or intoxicated with the siren call of Islamicist martyrdom. So all of these lines of explanation say something about what happened in Orlando, but none explains it entirely.

By almost every conceivable measure, we are a sophisticated nation. But a lot of the cutting-edge culture in which we take such pride is, I fear, a mere patina that only obscures the child-like, impulsive cowboy deep within the American soul. We pride ourselves on being a nation of peacemakers and consensus builders. But I begin to wonder if that isn’t only how we enjoy thinking of ourselves…and if, just beneath the surface, we aren’t a nation of gunslingers that not only doesn’t truly abhor gun violence, but secretly—or not so secretly—revels in it. We couldn’t possibly loathe more the violent extremists who perpetrate terrorist attacks on innocents at home and abroad, but there is something in our collective American soul that admires violence, that is just a bit fascinated by it. We abhor murder and rape. But Hollywood turns out an almost endless series of movies that pander shamelessly to a movie-going public that is far more transfixed than repulsed by acts of savagery and violence. We speak endlessly about how much we yearn for peace. But no one buys video games that feature peoples living in peace and learning pacifically to co-exist in God’s world.

To blame Orlando on the degree to which the shooter turned his back on our gentle society to embrace the fiery rhetoric and siren call of the barbarian elements in his own religious world…is at least a little to miss the point. Since 9/11, ninety-four Americans have died at the hands of violent jihadists. By comparison, about half as many people, forty-eight, have died at the hands of far-right extremists. (Click here for both complete lists.) But in the years between 2001 and 2014, almost 221,000 people in the United States were victims either of murder or non-negligent manslaughter. For American men between the ages of fifteen and twenty-nine, gun homicide is the third-leading cause of death. If every single month of the year France were to endure a mass shooting like the one last year that took 130 innocent lives, its annual death rate from such attacks would still be lower than our rate of death by gun homicide.
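(A rough back-of-the-envelope calculation shows why; the figures that follow are round numbers of my own choosing rather than official statistics: roughly 66 million residents of France, roughly 320 million Americans, and something on the order of 11,000 gun homicides in the United States each year.)

$$\frac{130 \times 12\ \text{deaths}}{66\ \text{million people}} \approx 2.4\ \text{per}\ 100{,}000\ \text{per year} \qquad \text{versus} \qquad \frac{11{,}000\ \text{deaths}}{320\ \text{million people}} \approx 3.4\ \text{per}\ 100{,}000\ \text{per year}$$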

And that is the dark side of the story: that Omar Mateen, disgruntled and enraged, was tapping into something dark and terrible deep within the American psyche that doesn’t hate violence anywhere near as passionately as we never tire of saying we do. That he perpetrated a crime of unspeakable evil goes without saying. But for Americans merely to wave him away as a crazy person without acknowledging the degree to which we have created a culture that accommodates violence and fosters a Wild-West attitude toward guns that no one seems to know how to defuse—that would be an example of self-serving rhetoric at its least introspective.

I can’t even bring myself to look at the portraits of the dead. As I wrote a few weeks ago in an entirely different context, each was a universe of experience and potential, a world of intelligence and emotion. Each had a backstory and a future to invent. None was a victim or a fatality; each was the whole world. As a nation, we must mourn our dead and pay homage to their memory. To do so while simultaneously looking away from the incredible number of gun homicides in our country, without ever wondering how we can stem that vicious, horrific tide…that is not to honor them at all, but to treat their deaths like statistics instead of taking their loss as a moral challenge that the nation that lost them should and must now face.

Thursday, June 9, 2016

Ruth, Naomi, Hillary

Our rabbis, clever sages who knew how to read exquisitely slowly and carefully, found something fishy in a word in the Book of Ruth that most people who know the book well, myself surely among them, have passed by a thousand times without thinking to notice, much less thoughtfully to interpret.

The word in question is only nineteen verses into the story. Naomi, a widow old enough to have married off both her sons but also young enough to imagine herself bearing more children, has decided to return from Moab, her current domicile, to her homeland, to Judah. What she was doing in Moab, the country across the Jordan from Judah, is simple enough to explain: there had been a terrible famine at home and so Naomi and her husband Elimelech moved east to wait it out in a place where there was apparently plenty of food. But things in the Bible (or in life) never go quite as planned. Elimelech died in Moab, leaving Naomi with her two sons. Eventually, they married, choosing local girls as their wives. But things in the Bible (or in life) really never do go as planned and, before the famine wound down, the boys—now young husbands—themselves died. And so was Naomi left with two daughters-in-law, Orpah and Ruth.

And now the story begins to get interesting. The famine finally ends and the three prepare to remove to Judah. But even though they actually do set out on their journey, they don’t get very far before Naomi comes to her senses and tells her daughters-in-law that they don’t owe this to her, that it would be more than acceptable to her for them to return to their parents’ homes and revert to their original status as Moabites of Moab. There do not appear to have been formal conversion rituals in the time in which the story is set, the era of the Judges that followed the initial conquest of Canaan, so the women’s status is at best ambiguous: they had been married to Israelites and so were deemed Israelite themselves, but now that their husbands are dead and buried, they’re in ethnic limbo—not exactly Moabites any longer but connected to Israel only by ties that were buried with their husbands. Orpah demurs briefly, but eventually she takes Naomi up on her offer and goes home. But Ruth sticks with Naomi, and Naomi, who can clearly see how committed Ruth is to coming along with her, eventually relents. And so the two of them, Naomi and Ruth, set forth from Moab on their not-too-long journey to Judah, to a homeland Ruth hasn’t ever known.

And now we get to our word, one among the 2,039 that together constitute the tenth shortest book of the Bible. The phrase in which our word appears is in relatively easy Hebrew: va-teilakhna sh’teihem ad bo·ana beit-lechem. And it’s easy to translate too: “and so the two of them walked together until they reached Bethlehem.” So far, so good…but there’s a tiny issue with the second word: if we’re talking about two women, it should be sh’teihen, not sh’teihem. The latter word would work if the plural subject referenced two men or even a man and a woman. But if it’s two women, then the rules of grammar require a feminine suffix and the word would then be sh’teihen. But it isn’t. And thereby hangs an interesting tale.

The Torah takes a dim view of crossdressing, formally forbidding men to dress up like women and women, like men. (You occasionally hear this verse used to condemn transgendered people presenting themselves other than as their biological bodies would suggest they should, but that seems exaggerated to me: even the ancients understood Scripture here to be speaking specifically about people who dress up like members of the opposite gender in order, for their own dishonorable reasons, to gain entry to places where only women or only men are allowed.) But the Book of Ruth is set in a time when the laws of the Torah were either not widely known or not widely observed—there are several examples of this in the book—and so our sages imagined the masculine suffix (which is not even a word, just a single letter) there to hint to us that Naomi and Ruth dressed up like men for their journey home to Judah.

It was, apparently, that kind of world. No one was too safe on the nation’s highways. But men were safer than women and so Naomi and Ruth made the choice to be men, or at least to present themselves as men, to go where only men safely could go. And it worked: eventually they arrived safe and sound in Bethlehem and our story commences in earnest.

It’s that image of Ruth and Naomi dressed up like men that stays with me. They could, of course, have demanded to be treated fairly and equitably. They could have asserted their natural right to travel on the nation’s highways unmolested and unbothered by predatory males eager to take advantage of women traveling on their own. They could have done a lot of things, but they chose, if not actually to masquerade as men permanently, then at least to present themselves as manly enough to discourage would-be assailants or harassers.

As I reread the Book of Ruth this last week as part of my lead-up to Shavuot, I was struck when my study was interrupted by the A.P.’s announcement that Hillary Clinton had become the presumptive Democratic nominee for President, which would make her the first woman nominated by a major political party to run for President. If she wins, of course, she will be the first woman in our nation’s history to serve as President.

Given our nation’s more than slightly conflicted attitude towards gender in general, Mrs. Clinton now finds herself in a strange situation. If she is perceived as behaving “like a man” (a thought further complicated by the fact that it’s hard even to say what that means exactly), then she risks alienating all those who are drawn to the possibility of allowing a woman to crash through the ultimate glass ceiling and serve her nation as our first female president. But if she insists on behaving “like a woman” (whatever that means), then she will clearly lose the votes of a certain slice of the electorate that would only be able to countenance a female president if she appeared somehow to be manly enough to make her actual gender an irrelevancy.

Confusing the soup even further is the fact that the notion of gender-based affect is itself suspect in the minds of most forward-thinking citizens. On the one hand, we want people to behave according to a canon of norms associated with their gender and are unkind to people who appear to want to sit on one side of the aisle and vote on the other: consider the difference between calling a man manly and calling a woman mannish, or between referring to a woman as possessed of womanly virtue and a man being called effeminate. But on the other hand, we also seem eager to tear down irrational gender-based distinctions in life and culture—in a world in which women routinely become doctors and men routinely go into nursing, it seems slightly retro for there even to be separate Oscar categories for “best actor” and “best actress.” (Indeed, in a world that would never countenance referring to female dentists or lawyers as dentistesses or lawyeresses, it feels quaint even just to use the word actress these days to refer to female actors.)

All that being the case, Mrs. Clinton’s gender constitutes a complicated riddle for Americans to work through. Nevertheless, even people who are not planning to vote for her can surely take pride that we have set to rest yet another instance of irrational gender-based bias, just as the nomination and election of President Obama can be celebrated by all, including his non-admirers, as the example of America setting the ultimate race-based barrier to rest. So the simple fact of Mrs. Clinton being a woman is something Americans should celebrate without reference to her specific policies or chances actually to win the presidency…and surely also not with reference to the degree to which she appears to embody or not to embody a set of stereotypes associated with womanliness or femininity. (Are those words synonyms? The fact that I’m not sure is also a point worth pondering.) It should surely be possible to celebrate Mrs. Clinton’s accomplishment without getting stuck on the ridiculous question of whether she is an appropriately female woman or an excessively mannish one…whatever those terms mean in today’s America. But when I think of poor Naomi and Ruth—two heroic figures whose bravery and cunning led, albeit a bit circuitously, to the birth several generations later of King David, Ruth’s great-grandson, and thus will lead, bi-m’heirah b’yameinu, to the eventual redemption of the world—when I think of them forced to pretend they were men to take their rightful place in their own society lest they come to harm on their way from the margins to the center, I also think of Mrs. Clinton and marvel at how far we’ve come.

No one has been formally nominated by anyone at all, yet the 2016 presidential election has already turned into one of the oddest presidential campaigns our nation has known. But even before anyone wins, the American people itself has won by following its rejection of race-based limitation—informal and surely illegal but wholly real until it suddenly wasn’t—on a citizen’s right to run for the highest office in the land with a rejection of the parallel gender-based limitation, one also un-enshrined in law and rarely mentioned in polite company but also entirely real in terms of the effect it had on women’s aspirations for political office.

I suppose Bernie Sanders deserves mention in this complex of ideas as well. He was, after all, the first Jew (and also the first non-Christian) ever to win a state in a presidential primary. So it feels right to see his campaign—and his twenty-three primary wins—as constituting a kind of third leg in the repudiation of irrational prejudice based on race, gender, or religion.

Maybe our nation really is growing up! Others paved the way, obviously. (Shirley Chisholm, for example, cleared a path both for President Obama and for Hillary Clinton when she, a black person and a woman, ran for president in 1972.) And the issues in play remain complicated. But the days of women having to dress up, either literally or figuratively, like men to be considered worthy candidates for public office are clearly over. And so our nation joins India, Israel, Germany, and the U.K.—as well as many smaller countries like Ireland and Iceland—in setting aside as irrelevant the concept of gender when it comes to choosing an able national leader.  My own feelings about all of this year’s crop of candidates are fairly conflicted. (More on that in the months to come.) But I know progress when I see it. And I think therefore that Mrs. Clinton’s designation as the presumptive candidate should be something of which we can all be proud. If her nomination leads to a national discussion of gender-based issues—and particularly to the timely demise of the notion that men or women are supposed to “be” one way or the other, and that failing to behave according to these pre-conceived norms is a sign of mental confusion, emotional distress, or moral turpitude—then that would be a very positive development indeed.

Thursday, June 2, 2016

Harambe

So a scorpion is standing on the bank of a wide river wondering how he’s going to get across when he suddenly notices a frog dozing on a nearby lily pad. Seeing an easy solution to his dilemma, he approaches the frog and asks if the frog would agree to swim across the river with him, the scorpion, on his back. The frog thinks for a minute, then declines. “How do I know you won’t sting me?” he asks reasonably. The scorpion, having expected the question, has a good answer at the ready. “Obviously I won’t harm you. Why would I? If I were to sting you,” he notes entirely plausibly, “you would die and we’d both drown.” The frog considers the response, then helpfully agrees to take his fellow creature across to the other side. And so they set off for the other bank and are just about exactly halfway across when the scorpion suddenly does sting the frog. As the venom seeps into the frog’s bloodstream and the paralysis sets in that will now kill them both, the frog summons up whatever strength he has left to ask the scorpion why he could possibly have chosen to do such a thing…and particularly because the scorpion’s betrayal of his own promise will now inevitably lead not only to the frog’s death but to the scorpion’s own demise as well. “What can I do?” the scorpion replies just before they both slip beneath the water. “It’s my nature….”

It’s a famous story. I remember it being told to great effect in The Crying Game, Neil Jordan’s terrific 1992 movie starring Stephen Rea, Forest Whitaker, and Jaye Davidson. But it’s way older than that. In the Talmud, for example, we read that Samuel, one of the greatest talmudic sages, once took note of a frog swimming across a river with a scorpion on its back. When they reached the other side, the scorpion stung some unfortunate soul who just happened to be passing by. That’s how these things work, Samuel then commented: when your time is up, your time is up…and even the least likely partnership can be brought to bear by Providence to enforce God’s edict. That’s not exactly the same story, of course, nor does it teach the same lesson, but the image of the frog with a scorpion on its back is exactly the same…and that specific image appears as well in Sanskrit and old Persian literature, where it is featured in stories that use it to good effect to teach lessons of various sorts. A full survey of such ancient stories would take us too far afield of the topic I want to write about this week, but the interesting detail is that the image itself of a frog ferrying a scorpion on its back across a river is a constant…and the lesson taught in the version cited above—that the difference between animals and people is precisely that animals are hard-wired to act in certain specific ways and have no ability to behave otherwise—is certainly worth taking seriously. Nor should we pass blithely by the corollary of that thought: that people, in this respect wholly unlike animals, do have that kind of control…if they choose to exert it to overcome their natural inclination to behave in some specific way that their moral compass recognizes as wrong or perverse. In other words, all God’s creatures come predisposed to behave in certain ways. But only humans possess the ability to override those predispositions and thus to behave as they see fit, not as their natures demand.

That story and its moral popped into my mind the other day when I was reading about the death of poor Harambe, 17, the 450-pound silverback gorilla shot dead by a zookeeper at the Cincinnati Zoo.

The story is essentially a simple one. A family of five—a mother and her four children—were enjoying a day at the zoo when suddenly one of those kids, a little boy of three, somehow climbed through the protective barrier intended to keep visitors from coming too close to the animals they’ve come to observe. He fell into a shallow moat intended to keep the gorillas from approaching the barrier and was plucked from the waters by Harambe, whose intentions were not at all clear. I’ve watched the video several times (click here to see it if you haven’t) and concur, without any specific zoological training to buttress my opinion, that the only thing that was clear was that nothing at all was clear. At moments, Harambe appears to be acting almost protectively towards the child, helping him to his feet and almost gently touching the boy’s hand. But then he begins to drag the child by his feet first through the water and then across the floor of his enclosure, the little boy’s head bouncing up and down on the concrete as he is dragged off. What would have happened next, no one will ever know, of course, because Harambe was almost immediately shot dead by zoo workers who then rescued the boy. The boy, fortunately, was not terribly hurt physically. If he was traumatized by the incident, I don’t imagine we’ll ever find out. Nor do we need to know. Three-year-olds are resilient, and we can reasonably hope for the best in his regard.

The real question has to do with the gorilla. Harambe was a very strong animal. The point was made by the director of the Cincinnati Zoo that silverbacks can crush coconuts with their bare hands, an image no doubt put forward because of the similarity in size between a coconut and a three-year-old’s head. (I’m guessing it takes a lot more strength to crush a coconut.) Because they can take up to seven minutes to work, the use of tranquilizer darts was not feasible in a situation like this one. Nor was there much time to consider how to respond, much less to debate the matter thoughtfully or to take counsel with experts: a child’s life was in danger and the zoo officials had to decide on the spot whether to save the child by the only effective means available to them or to tranquilize the animal and hope nothing too bad happened while the darts took their time to work. Nor was it at all helpful that there was a crowd of onlookers, including the child’s mother, screaming while this was all unfolding, thus potentially unnerving or upsetting the animal and prompting him to act more, not less, violently than he might otherwise have. So the situation was grave, the amount of time to weigh options was negligible, and the assumption that Harambe, a noble-looking beast whose pose and affect are both almost human in his official portrait (reproduced above), would somehow rise above his animalness to behave gently and kindly towards the boy was zoologically absurd. It is true that gorillas are generally herbivores, so the chance that Harambe might have mistaken the boy for his lunch was almost nil. But that he could easily have inadvertently killed the boy is, as far as I can see, beyond question.

All that being the case, the hue and cry over the decision to save the boy by shooting the gorilla dead is all the more bizarre. What makes human beings human is precisely our ability—often ignored but always real—to direct our own behavior by engaging in a process of moral decision-making that we can then use to override the genetic predispositions with which we, as animals ourselves, come pre-equipped at birth. We do have the ability to behave gently and kindly when our genetic predisposition would be to act violently and without regard for the safety or well-being of others. But gorillas are, in the end, animals. They can be adorable and they are certainly more sophisticated beings than cockroaches or field mice. But they lack the ability to reason morally in the sense that human beings do…and we forget that at our own peril. The zoo staff in Cincinnati acted correctly, making the split-second decision to value human life over an animal’s even if that animal belongs to an endangered species. The bottom line: you’re only allowed to value the life of a gorilla over the life of a little boy if you would be prepared to stick to your guns if the boy in question were your own son or grandson. Otherwise, you’re just posturing to make a point over the potentially dead body of somebody else’s child.

What interests me about this whole story are its implications for the way we view the world in general. All living human beings belong to the species Homo sapiens (“wise person”); the other species that once constituted the larger Homo genus—Homo erectus, for example, or Homo neanderthalensis—are long gone. It’s just us now…and the specific name we have chosen to assign to ourselves declares that our distinctiveness from our forebears lies precisely in that we—as opposed to they and certainly to non-human fauna—are capable of bringing some combination of learning, understanding, and ethical insight to bear in the decision-making process that we ourselves celebrate as the very hallmark of being human.

And yet we continually decline to allow that specific dimension of humanness to serve as the foundation stone upon which we stand as we view the world. Stereotyping, imputing to others an animal-like inability to override genetic pre-sets when attempting to find a moral path forward, deriding fellow-humans as beasts who specifically cannot behave other than violently or harshly—all of these prejudicial approaches to the world merely excuse unethical behavior by making it somehow a consequence of the fault in someone else’s stars rather than the result of unprincipled decision-making.

When I hear people, for example, attempting to excoriate terrorists who murder innocents by saying things like “that’s just what those people are like,” or “they’re violence-prone reprobates, so what do you expect?” or comments of that sort, all they’re really doing is excusing that behavior with reference to the inability of the perpetrator to decide not to perpetrate his or her crime. In fact, to attempt to insult people with reference to their genetic make-up, their religion, or their nationality is actually to say precisely the opposite thing—that they are somehow not fully responsible for their actions. That’s the stance we have to combat, I believe. Harambe would not have been responsible for his actions even if he had killed that child. But that’s precisely because he was an animal, not a human being.

I’ve occasionally noted from the bimah that every single guard at Treblinka or Majdanek was once an innocent babe nursing at its mother’s breast, that none of them—despite ending up among the most grotesquely depraved criminals ever to walk the earth—none can have his or her actions excused with reference to religion, nationality, or political affiliation. Each chose the path of utter depravity and indifference to human suffering. Each wholly and totally rejected the concept that life is a gift from God, that the life of every individual human being is of inestimable value. Each embraced criminality on a scale never before known to humankind. But none had to! Just as none of the 9/11 murderers had to choose a life of terror and violence. Perhaps in our day, that lesson is even more important to say out loud. Islam didn’t make them do it. Their Saudi (or Egyptian or whatever) citizenship didn’t make them do it. Their teachers or imams didn’t make them do it either. Each had the possibility of choosing to revere life and to behave decently towards others, yet each chose to behave otherwise. To deride their crime as a function of their faith and not as the decision made by those specific people to behave in that specific way—that is not to damn them but to excuse them. And the same is true of every other knife-wielder or suicide bomber.

Harambe’s death was a tragedy. A poor beast who did nothing wrong was obliged to pay with his life for behaving naturally and normally. If any good can come from this whole incident at all, however, it would come not from encouraging us to think ill of animals, but from reminding us all exactly how animals and human beings differ.

Thursday, May 26, 2016

Fifty Years On

Why is it that I can believe how old I am and how old my children are, but am having such difficulty in believing—by which I specifically do not mean “coming to terms with” but actually believing—that this Shabbat, May 28, 2016, is going to be the fiftieth anniversary of my bar-mitzvah, which took place (obviously) on May 28, 1966? That’s the question I’d like to address in this space this week!

I really can believe how old I am. (I might as well!) And I can also believe, more or less, how long I’ve been married. I can believe lots of things! But what I can’t quite grasp is how half a century can possibly have passed since that fateful day—and in terms of my future life it truly was fateful—how a full half-century can have passed since that day on which I, like every bar-mitzvah boy before me and since, stepped uncertainly forward as my name was sung out, grasped the wooden rollers of the Torah scroll that lay open before me, and, in my thirteen-year-old’s still-highish voice, pronounced the blessings that somehow constituted the liturgical equivalent of the threshold I was expected at that tender age to step over into manhood. Or at least into adulthood. Or rather into the specific combination of the two that everybody says matters incredibly in terms of who you are in the world even if you do have to go back to junior high school the following Monday. (Note to younger readers: in olden times before there were middle schools, “junior high school” was where you went to wait out the hellish interval between elementary school and high school and, occasionally, to learn something.) I was a young thirteen at my bar-mitzvah too, as I recall, with even the advance heralds of impending pubescence still considerably further down the pike.

But it wasn’t my boyish demeanor or my slightly-too-big suit from Barneys—much less my clip-on necktie—that I remember most vividly from that day: it was my mother sitting in the front row between her own aunts Bea and Alice and crying as she focused not on me or my great accomplishment in becoming a man, much less on my haftarah, but rather on the death of her own mother only a few months earlier, a loss that both overshadowed and lent unexpected meaning to my big day.

I was a precocious lad, I suppose, but I truly recall thinking that, with the death of my grandmother leaving me with no grandparents at all (and placing my parents in the front line facing the eventual void that we all fear less when there’s someone around we deem likely to fall into it first), I was in some way taking their collective, now fully vacated, place in the Camp of the Israelites. In that thought, I included them all—the grandfathers I never met, the grandmother I only vaguely recalled from my days as a toddler, and my maternal grandma (of whom I’ve written in this space many times with respect to her Bensonhurst-blue hair, her gentle demeanor, her considerable artistic talent, and her almost unbearably overheated apartment on 84th Street in Brooklyn)—and felt that, by stepping forward into at least theoretical adulthood, I was ensuring both the continued existence of my own family and, in some extended sense, of the Jewish people itself.

This latter notion I didn’t come up with on my own—no thirteen-year-old is that precocious—but was rather planted in my brain as a result of something the rent-a-rabbi provided by Riverside—whom I met the day of my grandmother’s funeral and never saw again and whose name I can’t recall if indeed I ever knew it—said to me in the limousine that took us from the I.J. Morris Funeral Chapel on Flatbush Avenue in Brooklyn to the Beth David Cemetery in Elmont. I was listening carefully, impressed that he was even talking to me at all as we drove eastward down the Belt. And, even if I was still unable to detect any nascent stubble where I was certain my sideburns were supposed to be, I got the message. When people ask, as they occasionally do, when I decided to become a rabbi and to spend my life in the service of the Jewish people, the formally correct answer is “years later, when I was already in college.” But that answer only records when the plant blossomed, not when the seed was planted.

But I digress. My bar-mitzvah took place on a beautiful May day. Strangely, the entire operation was conducted on foot. We walked from our apartment house to the Forest Hills Jewish Center for the service not because we were strictly—or even vaguely—Shabbat-observant, but merely because my father was certain that we would have ended up parking further away from the synagogue than we actually lived. (Anyone who lives or ever has lived in Forest Hills will attest to the reasonableness of that thought.) And then, when the service was over, my parents led a kind of Cohen Family Parade down Queens Blvd. to the Stratton Restaurant, in its day a well-known roast beef restaurant with an interior vaguely reminiscent of an English pub (or, rather, of what its owners must have hoped their customers would imagine British pubs to look like) and today a branch of the TD Bank. We were entertained, as I recall, by a three-piece band, a trio consisting of a bassist, a pianist, and a drummer. We sat in pre-assigned seats, as indicated on an elaborate seating chart created by my mother herself on a large piece of oak tag using blue and red magic markers. There were supercool cylindrically-shaped boxes of matches on each table, each covered in yellow velvety paper and emblazoned with my name and the date of my bar-mitzvah in fancy gold lettering on the side. And then, when it was all over and all the other guests had left, my parents, my Aunt Ruth and Uncle Herb, and I walked home to our apartment.

Where my mother and her sister went off to once we got home, I don’t remember. But my father, my uncle, and I repaired to my bedroom to tally up my checks and to inspect the rest of my gifts. And then eventually my aunt and uncle also left—in their white Cadillac convertible with red leather seats, which the thirteen-year-old me deemed even cooler than the yellow match box cylinders—and I was home alone with my parents, a boy no longer (or at least not in the sense I had been only a day earlier) and more or less ready to begin the rest of my life.

After all these years, when I think back on that day in May of 1966, I’m stunned by how much has changed in my life and in the world since then. It was a tumultuous month, that May. The siege of Da Nang in South Vietnam was underway in full force. The Cultural Revolution in China was just beginning. Bob Dylan had just released Blonde on Blonde. The number one song on the radio was the Rolling Stones’ “Paint It Black.” (The thirteen-year-old me was about to become a huge Stones fan.) Cuba was under martial law as the Cubans awaited an American invasion that never came. LBJ was in the White House. John V. Lindsay, whom my father particularly reviled, was mayor of New York City. It all seems like such a very long time ago!

And yet it also doesn’t. From the vantage point of half a century’s worth of days, something like 18,250 of them, I can see the trajectory of my life more clearly than ever before. I can understand that, although it was weightless, imperceptible, and invisible, the mantle that settled on my shoulders as I stood before the incredibly byzantine, slightly scary Ark of the Law on the bimah at the Forest Hills Jewish Center and tentatively read out the haftarah that morning recounting the birth of Samson was totally real. I didn’t perceive it at the time. I was more focused on being annoyed—this I do remember clearly—that the other boy having his bar-mitzvah that morning, and with whom I thus had to share my big day, was also named Martin…which led to people confusing us, amusingly, throughout the morning. (My parents thought I was overreacting, particularly since even I myself couldn’t say why I was making such a big deal out of what was obviously a mere coincidence.) But although I could barely perceive what was happening to me at all, I now see myself on that day crossing a line back over which I have never stepped or even, speaking honestly, thought to step. Somehow, I actually did become a man, or at least a pre-man, on that specific day and in that specific place.

The haftarah was also a coincidence, merely the lesson from the Prophets that accompanied the Torah portion assigned to me because that portion went along with the Shabbat that had itself been assigned to me. But sometimes even the most fully coincidental happenstance can lead somewhere profound and real.

In the story as told, a man appears out of nowhere and tells the wife of one Manoah that, although she has never in the past been able to conceive a child, she will yet be a mother and that the child she conceives will devote his life to the service of God. Later, the man (an angel, it turns out) repeats the same message to Manoah himself. But when Manoah demands that the man reveal his name, he responds that the request cannot be honored, that his name is not merely unknown but actually unknowable. And, with that, he steps into a flame rising from Manoah’s makeshift altar and, riding it aloft, ascends to heaven. The best part is that the man’s promise actually comes true. Manoah’s wife conceives and eventually gives birth to a son, whom his parents name Samson. And then we get to the bottom line: va-yigdal ha-na·ar va-y’varkheihu ha-shem. The boy grows up and God blesses him in every way.

I lack Samson’s physical strength. (Who doesn’t?) But we only children tend to stick together and for that reason alone I’ve always felt connected to Samson’s story. I don’t suppose my future existence was announced to my parents by an angel. But I do know that my parents, both of them, were not at all sure they would ever produce a child until they finally produced me and that, in this just like Samson, I too was born to my older parents in the context of expectation and mild amazement. And I too grew up to be blessed by God in every imaginable way: with marriage and children, career and vocation, creativity and productivity, purposefulness and a deep, abiding sense of wonder that I ended up charged with doing so much and serving so many…and that now, after all these many, many years, I still feel myself in the middle of the journey, nel mezzo del cammin di nostra vita, proud of the past but far more filled with curiosity and hope, still, about what the future might yet bring.

These are the thoughts I bring to this fiftieth anniversary of the day on which I stepped into manhood and took my place in the House of Israel not as my parents’ child but as a man in my own right. I didn’t look like a man. I probably didn’t act much like one either. But as the mantle settled onto my shoulders, I was altered by the moment…and thus became myself long before even I had any idea what that would or could possibly come to mean.

Thursday, May 19, 2016

Obama at Hiroshima

President Obama is planning to visit Hiroshima on his trip to Japan next week to attend the Group of Seven meeting at Ise-Shima, and thus to become the first sitting American president to visit one of the only two cities in the world ever to be devastated by a nuclear bomb. (Nagasaki, of course, was the other city. The other presidents to visit Hiroshima were Richard Nixon in 1964 and Jimmy Carter in 1984, the former before he was elected president in 1968 and the latter after he left office.) The G-7 has its own agenda, obviously. But the decision to visit Hiroshima calls for consideration in its own right.

Presumably to head off criticism in advance, the White House has announced in no uncertain terms that the President will not apologize for the American decision to use atomic weaponry to end the Second World War when he visits Hiroshima. Nor, indeed, has any of our other post-war presidents done so, although President Eisenhower’s publicly expressed regret over our nation’s use of “that awful thing” to bring the war to a close probably came the closest. But his off-hand expression of regret was hardly an apology, nor did anyone (including most definitely the Japanese) take it that way.

My own feelings about Hiroshima are complicated. On the one hand, the loss of civilian life was truly horrific. About 140,000 civilians are thought to have died as a result of the bombing of Hiroshima on August 6, 1945, about half of whom died on the day of the attack itself. (In addition, about 20,000 Japanese soldiers died on that day in that place.) An additional 80,000 died as a result of the bombing of Nagasaki three days later, again about half of whom died instantly. Whether or not the decision to use atomic weapons against Japan was justified depends on the vantage point of the person asking the question, but no one can dispute the fact that the attacks were fully successful: Japan surrendered unconditionally not even a full week after Nagasaki, and with that ended a war that took the lives of somewhere between seventy and eighty-five million people, more than three percent of the entire population of the planet.

Comparing the number of dead at Hiroshima and Nagasaki to the number of dead at Pearl Harbor—a mere 2,403—is, to say the very least, ridiculous: the men and women who died at Pearl Harbor were murdered—executed in cold blood by a nation that was specifically not at war with the United States—whereas the dead at Hiroshima and Nagasaki were citizens of a country that not only was at war with the nation that attacked them, but that had itself initiated the war with its horrific surprise attack on our naval forces in Hawaii in the first place. The number of people murdered by the Japanese regime between the invasion of China in 1937 and the end of the war—5,400,000 by most estimates, to which must be added the more than half a million POWs who died in Japanese custody and the tens of millions who died in China, the Philippines, and the other countries occupied by the Japanese of various combinations of disease, deprivation, and occupation-induced misery during the war years—seems a more reasonable figure to discuss in this context. But even that gargantuan figure doesn’t really work: the more than 300,000 civilians whom the Japanese executed at Nanking alone during the winter of 1937–1938, for example, were killed for no military reason at all, whereas the bombing of Hiroshima and Nagasaki can reasonably be said to have saved all those civilians and soldiers, including most definitely American and other Allied soldiers, whose lives would have been forfeit in the land invasion of Japan that would surely have ensued had the war not ended when it did. Whether more or fewer Japanese civilians would have died in the course of a massive land invasion than died at Hiroshima and Nagasaki is, of course, unknowable. What is certain, on the other hand, is that they would surely not have been the same people who died on those days in August 1945…which means that uncountable numbers of Japanese civilians who survived the war also owe their lives to the American decision to do whatever it was going to take to bring the war to an end.

What I keep reading, including in comments by Benjamin Rhodes, President Obama’s deputy national security advisor, is that the President’s decision to visit Hiroshima is related more to his vision for a nuclear-free future than to his feelings one way or the other about the ultimate rightness or wrongness of President Truman’s decision to authorize the attacks of August 1945. I suppose that makes sense—nuclear weapons have only been deployed twice in the history of our planet, and so the most dramatic place for the President to make what will probably be his last major appeal for nuclear disarmament would have to be one of the sole sites, other than test sites, ever to experience the actual force of a nuclear explosion. And yet, even though that thought has a certain cogency to it, any number of factors—including not least of all our current relationship with Japan—will prevent the President from speaking openly and fully honestly about the events of August 1945 and require that he focus instead on the horrors of war generally without indicting—and certainly not forcefully—the Japanese as the authors of their own debacle. Nor will he feel free to opine, even obliquely, that the barbarism that characterized the behavior of Japanese forces in the lands they occupied during the war—and the millions of dead in those countries, and particularly China, at their hands—simply required that the war be ended by whatever means were available to whoever could deploy them and that, in the end, nothing else mattered more. Even less likely is the possibility that he will choose to quote President Eisenhower’s famous remark that the sole immoral act possible when fighting against demonic enemies like Nazi Germany or Imperial Japan would have been to lose the war.

But even if the President could speak totally openly, does it really behoove us to enter into the kind of ghoulish calculus that would likely follow his assertion, unproven and unprovable, that more lives were saved than lost by President Truman’s decision to deploy nuclear weapons against Japan? My guess is that it probably is true, but do we really want to go there? The civilians who died at Hiroshima were not personally responsible for Pearl Harbor or the rape of Nanking. They were, as is inevitably the case for so many in wartime, simply in the wrong place at the wrong time—innocents, including babies and young children, who were incinerated to end a war they personally didn’t start and in which they, speaking specifically of the children, didn’t play any role at all.

Not to regret their deaths, let alone actually to blame the dead for their own fiery demise, would be an example of moral depravity. And I say that as someone who thinks President Truman did make the right decision to bring the war to an end with the means he had available to him and who considers himself a moral, decent person who would never step over dead babies on the way to perform even the most moral or praiseworthy act. The moral conundrum is acute, then: to approve of the bombing means to look past the victims and in essence to blame their fates on their own nation’s leadership, but to wave away their deaths as mere collateral damage in an otherwise fully justified military action requires that the waver-away be made of sterner stuff than I personally am. In my own opinion, since the President will be constrained both by the strictures of good taste and by the realpolitik of the day from speaking totally openly at Hiroshima—and since the moral puzzle is insoluble, yet to speak on the subject at all is by definition to present at least obliquely one side of the argument as one’s own—it would probably have been a better idea not to go at all.

When I was a senior in high school, I read John Hersey’s Hiroshima. Originally published as a full-issue-length essay in The New Yorker in 1946 and subsequently republished many times as a stand-alone book, Hiroshima focuses solely on the individual fates of a handful of people present in Hiroshima when the bomb exploded, and it makes it more or less impossible to think of the people incinerated at Hiroshima and Nagasaki as a faceless mass of indistinguishable dead. Being a child of my time, I read Hersey’s book in the light of our nation’s ongoing experience in Vietnam. But being as well the teenaged version of my future self, I also read it in light of Auschwitz and resolved never again to speak of “the dead” without recalling that the gas chambers were not filled with “people” or with “victims,” but with an endless number of individuals, each an entire universe, each a world of passion and culture, of intelligence and potential. Nor was it ever again possible for me to think of the dead at Hiroshima as a faceless mass of unfortunates.

As I contemplate President Obama’s coming visit to Hiroshima, I find my mind turning—slightly unexpectedly and probably not entirely fairly—to the events of December 7, 1970, when Willy Brandt spontaneously fell to his knees before the monument marking the spot that was once the entrance to the Warsaw Ghetto as an act of personal remorse and national contrition. “Under the weight of recent history,” he later explained, “I did what people do when words fail them. In this way, I commemorated millions of murdered people.”

In a sense, Chancellor Brandt had it easy. He represented the nation that perpetrated evil in the world on a previously unimaginable level and brought unprecedented levels of human suffering to countless innocents. It must have been wrenching for him to go to that place and do that thing…but he did it, and his reputation as a man of honor was established permanently, at least in my mind, on that day and at that specific hour. But President Obama is facing an altogether more vexing challenge. Like Willy Brandt, he represents a nation that brought about the deaths of countless innocents. But he does not represent a nation that acted indecently or immorally; just to the contrary, he represents the nation that defeated the forces of demonic evil and helped establish democratic governments not only in the countries occupied by the fiends and their allies, but in the perpetrator nations as well. He has, therefore, nothing to apologize for…and yet to use that truth as an excuse for looking away from the horrific loss of life that our best efforts to win the war brought to people who were neither the leaders of their nation nor the perpetrators of its horrific policies in the countries it occupied—that would not behoove the leader of the Free World even slightly.


And it is that precise conundrum I wish the President had chosen simply to avoid by flying directly home after the G-7. To walk the tightrope before him and to speak honestly and candidly about the legitimacy of America’s efforts to win the war at all costs, and at the same time neither to demean the civilians who died at Hiroshima and Nagasaki nor to blame them for their own deaths—that is a challenge that I’m not sure can be successfully met at all. Willy Brandt was correct that there are some things that really cannot be said in words. But the gesture he chose to give voice to thoughts that could only be expressed outside of language is certainly not one available to the President, and neither does he have the option of appearing in that place but saying nothing at all.  The world will be listening next week to what he does choose to say…and so will the ghosts.