Thursday, February 4, 2016

Compromise at the Kotel

It’s a bit hard to know what to make of this week’s historic compromise regarding the use of space at the Western Wall in Jerusalem by non-Orthodox prayer- and tour-groups. Certainly, that any compromise at all came about is remarkable. (The Ḥareidi types who run the show at the Kotel—the Western Wall—and anywhere in Israel where the Chief Rabbinate holds sway are not renowned for their willingness to cooperate when on-paper conciliation might conceivably lead to actual on-the-ground concession.) And, yet, there they all were on the front pages of all the Israeli newspapers and on-line news sites agreeing not to make a ruckus about a significant portion of the Western Wall Plaza—nearly 10,000 square feet, double the size of the area currently assigned to non-Orthodox groups—being formally assigned to groups independent of the Chief Rabbinate and its minions for their liberal use. (An area that size can accommodate about 1,200 people, so this really is a serious amount of space.) Perhaps to suggest that this compromise should be taken as a sign of unity rather than divisiveness, there will still be one single entrance to the site. Nor has it been made clear exactly how the different sections will be labelled on public signage—an issue that will seem unimportant only to people unfamiliar with the level of almost venomous dislike that characterizes more or less any situation in which ultra-Orthodox Jews and more liberal types in Israel are obliged to reference each other in print or orally. Yet signs there will somehow have to be…and they will have to say something. (What they won’t say is “Fundamentalist fanatics to the left and heterodox iconoclasts to the right,” however, which is ironic since that is precisely what the average secular Israeli actually does think of both parties to this week’s agreement. Saying so on a sign, however, would be contrary to the spirit of the accord. And, at any rate, such a sign would only be half right!)

The Orthodox end of the Kotel, what most of us have always thought of as “the” Kotel, will, at 21,500 square feet for both the men’s and women’s sections, still be the larger piece of property. And it also bears saying that the feel of the Orthodox space at the Kotel—the size, the location, the demeanor of the people in charge, the strictly enforced adherence to Orthodox rules in terms of prayer and Torah reading, and the absolute segregation of prayer groups by gender—will remain unchanged. Indeed, the casual visitor who drops by once all these changes are put into place and isn’t specifically looking for them will probably miss the whole thing. And that, sadly, is why the compromise had a chance in the first place: not because the parties to it are eager to embrace each other as respected neighbors or cherished brethren, but precisely because the way things will be laid out on the ground will make it more or less possible for the two groups neither to see nor hear each other, nor be obliged even to take begrudging note of each other’s presence. In our country, “separate but equal” was struck down by the Supreme Court in 1954 because the situation on the ground was so much more separate than equal that the concept was deemed to be meaningless. This week’s compromise too will yield results more separate than equal, but it is such a vast improvement over the situation that has prevailed in these last years that it’s hard to see why we should not embrace it. It’s not perfect and it’s certainly not the ideal—which would be for the entire Western Wall to be free and open to all without anyone insisting that anyone else hew to standards not his or her own solely to make the insister slightly more comfortable—but that will simply never happen. So this is what we’ve got and I say we should run with it. But why it matters so intensely to so many people…that is the more interesting question, I think, and it’s the topic I’d like to address this week.

The Western Wall was never part of the actual Temple, but was one of the support walls built to keep the Temple Mount from collapsing under the enormous weight of the stupendous structure that once sat atop it in precisely the space currently occupied by the Dome of the Rock. (The Dome of the Rock itself was built in that place in the seventh century C.E. by the Umayyad Caliph Abd al-Malik specifically to stress the ascendancy of Islam by positioning its most gorgeous shrine precisely on the site of the ancient Jewish Temple.) So the Wall itself wasn’t part of the Temple…but whether there actually are physical remains of the Temple in any of its iterations—the First Temple built by King Solomon and destroyed by the Babylonians in the sixth century B.C.E., the Second Temple constructed on the spot of the first by the returnees from exile in Babylon, or the enhanced version of that same structure refurbished almost to the point of being rebuilt by King Herod towards the end of the first century B.C.E. only to be demolished by the Romans in 70 C.E.—hidden in the earth under the Temple Mount,  no one knows. Nor, mostly for political but also for practical reasons, will anyone ever know. And that leaves us with what we actually do have: the part of the western support wall that is visible to all at the Kotel Plaza today and from there south to the end of the Temple Mount and the part of the wall that is accessible to visitors only through the so-called Western Wall Tunnel that extends underground in the other direction as far north as the Via Dolorosa.

It sounds like an ancient artifact, something like the Jewish version of Hadrian’s Wall or the Great Wall of China, yet that couldn’t be further from how it feels when I’m actually present in that place. And I should know because I’ve been there in almost every conceivable setting: late at night and early in the morning, in the bright sunlight and in the rain (but never in the snow that occasionally falls in Jerusalem), on Shabbat and on weekdays (and on every other holiday except for Rosh Hashanah and Yom Kippur), on Tisha Be’av (the anniversary of the Temple’s destruction both by the Babylonians and by the Romans) and on Yom Ha-atzma·ut, Israel Independence Day. My first visit was in 1974 when, a callow youth only five or so years older than my charges, I was leading a teen tour to Israel. (When I first visited Israel in 1966, the Old City of Jerusalem was still in Jordanian hands and thus fully inaccessible to Jewish tourists who might otherwise have entered from Israel.) My most recent visit was the day before we left Jerusalem last summer, when I went there to say my prayers before leaving. I’ve been called to the Torah there and I’ve dukhened there as well—many times, actually. It is the one place in which I find myself willing to put up with those people for the greater good of worshiping in that place. And it is the one place in which, despite my usual inclination to anchor any ruminative thinking about the future of the Jewish people in anxiety and fretful apprehension, I find myself unworried about the future and secure in God’s promise always and ever to watch over the House of Israel and the Land of Israel. This is not how I usually frame my thinking about the future, but it’s how I feel when I’m there. And that alone is why I generally gravitate towards the Kotel as soon as I arrive in Israel. Even after all these years, it’s still hard to describe the feel of the place or the power it somehow exerts on me almost as soon as I catch a glimpse of it from afar.
But that power is real and I succumb to it always.

When I was in graduate school, I was very taken with the four-volume work called the Ḥemdat Yamim, a work of unknown authorship first published in Istanbul (then Constantinople) in 1735. Similar in genre to other works of kabbalistic ethics that also appealed to me greatly in those days (and which haven’t entirely lost their allure for me even after all these years), the Ḥemdat Yamim managed to conjure up—for me personally at least—the image of a kind of Jewish life that was beyond appealing: rich, tolerant, intelligent, honest, fully observant without being exclusionary or arrogant, and at least theoretically attainable by regular people such as myself and ourselves. Here and there, though, the author asks a lot of his readers. For example, one of the most famous passages in the book describes the experience Rabbi Abraham ben Eliezer Halevi Berukhim (1515–1593) had at the Kotel in 1571. He was sick unto death in those days, sinking fast and unable to find a doctor to restore him to good health. And it was then, in what would likely otherwise have been his very last days, that his teacher, Rabbi Isaac Luria, the holy Ari, told him to go to the Kotel and there to encounter the Shekhinah, the living embodiment of God’s presence on earth. And there he went and, amazingly, he had exactly the promised experience: he saw the Shekhinah wandering down from the Temple Mount, Her head uncovered as though in mourning for Her temple. Seeing Her in such distress, he burst into tears and ran for cover into a nearby house only to miss the doorway, run into a wall, and knock himself out. And then he awoke to find his head cradled in the Shekhinah’s lap as She dried his tears and told him to calm himself, that he wasn’t done with this life after all, and that he would recover. And that is exactly what happened: he returned to his master in Tzfat and lived another twenty-two years.

I’m not sure why exactly that story spoke to me so deeply, but it has stayed with me from the moment I first read it decades ago when I was first encountering the Ḥemdat Yamim. Admittedly, it sounds like just a folktale, like the kind of fable Jewish people once told easily about their rabbis and those rabbis’ disciples. It sounded that way to me too…until, on my honeymoon, I returned to the Kotel for the first time since reading it. Joan was in the women’s section and I was in the men’s. (What God had put together, the Kotel had no problem setting asunder.) And so there I was trying to daven, but all I could think about was that story and how entirely plausible it felt to me as I stood there in the shadow of the Wall and felt myself fully suffused with the palpable presence of the divine. It really is hard to explain what I mean. I’m not sure I can find the right words to explain it fully even to myself, let alone to others. But there is holiness in the world and then there’s holiness, the kind you can feel spreading over you when you find yourself in exactly the right place at the precisely correct moment. For me, it was that first visit to the Kotel on our honeymoon that sealed my fate: even though I was still working on my dissertation and was formally preparing myself for a career in academics, I knew at that specific moment that I would end up working in the congregational rabbinate. It took me a while to talk myself into acting on that decision—I accepted my first pulpit only six years later—but it was that specific moment at the Kotel, as I filtered what I could remember of the Ḥemdat Yamim’s tale through the actual experience of standing before the Kotel as a married man and a rabbi and an almost-Ph.D., that set my course.

Over the years, people have often asked me when it was that I knew I wanted to be a rabbi. Now you all know. Is it odd that that moment came years after I was ordained, after I had already spent all those years in rabbinical school studying for ordination? I suppose it is! But that is how life truly is: sometimes a road to travel on, sometimes a barrier to be turned back by…and sometimes, if you are lucky enough to be standing in exactly the right place at the precisely correct moment, a gate to step through.

Thursday, January 28, 2016

Remembering the Struma

It’s a truism, I suppose, that sensory perception—the general rubric for the various ways we experience the world through our senses—functions less as a window onto the world outside ourselves than as a convention that makes us feel linked to each other without there being any actual proof of real commonality of experience. So you and I agree that roses are red because “red” is the word we English-speakers use to denote things that appear to us to be of that specific color. But knowing—not just asserting, but actually knowing—what you see when you look at a red rose and being certain that it’s exactly, or even inexactly, what I myself see—that is an entirely different proposition. We could probably also agree that potato chips taste salty, but what that means is that our taste buds, when they come into contact with salt, somehow encode that experience in the kind of electrical impulses that our brains can decipher, prompting us to label it with that specific word, a word that bears meaning only because the toddler-versions of ourselves were told by our parents to use that word to describe that thing. But to know with any actual certainty that your brain interprets that signal exactly as mine does and that we actually are sharing not only the word but the experience of having exactly the same taste—who could ever say that with any certainty?

As a result, we live in a world that feels linked by common experiences expressed in common language…but it’s only the language that can truly be verified as shared: the experience just feels that way but without any empirical data proving that we actually are seeing or tasting (or hearing, etc.) the same thing. And what’s true for people is also true for nations, I believe. Or perhaps I should speak only of what I truly do know: that there is a certain false commonality of experience that makes the world able to contextualize specific events in Jewish history so as to make them feel like the Jewish equivalents of other events in other people’s histories…but which, from the inner vantage point of the actual members of the House of Israel, feel totally unrelated to those events in any but the least profound way possible. We use the same words to describe these things because words are all we have to describe anything. But that hardly means that we experience them in the same way.

These thoughts came to mind as I read the other day of the death of David Stoliar, the sole survivor of the Struma. His name was unfamiliar to me. But the back story that makes his story simultaneously miraculous and horrific was well known to me…and serves in my own mind as one of those examples of experiences that feel shared because we use the same words to tell other people’s vaguely similar stories but that also feel entirely unique and unrelated to those other stories. 

The Struma itself has mostly been forgotten. Once it was a luxury yacht, a 150-foot steamer built in the mid-nineteenth century, but by the 1930’s it had been relegated to carrying cattle up and down the Danube under the Panamanian flag. And that is what it was doing when several Zionist organizations, desperate to find a way to help Jews escape the Nazis, hired it with the idea of using it to bring hundreds of Jewish refugees from fascist Romania to British Palestine, all of them men, women, and children who were almost surely going to be killed if they found no way to flee. Eventually, there were 781 passengers aboard along with ten crew members. The ship left the Romanian Black Sea port of Constanza, but the engines failed repeatedly. At one point, the passengers—who were mostly robbed of their cash and valuables when attempting to board the ship—were obliged to give up their wedding bands to pay the captain of a passing tugboat to repair the engines. Sanitary conditions were abysmal: there were eight toilets for almost eight hundred people. Eventually, the engines failed decisively. For a while, the ship just sat there…and then the Struma was towed to Istanbul. And that is where the tragedy began in earnest.

The British refused to grant the passengers visas to enter British Palestine. The Turks refused to allow the passengers to disembark at all. When the British—under enormous pressure—finally agreed that children between the ages of eleven and sixteen—a tiny percentage of the people on board—would be given visas for Palestine, it meant nothing because they refused to provide a ship to transport them further and the Turks refused to allow them to leave the ship to find land transportation. Finally, the Turks, eager to be rid of the whole messy incident, forcibly towed the boat through the Bosphorus and out into the Black Sea. Since the engines were completely dead, the Turks simply abandoned the ship in the middle of the sea and returned home. There were two lifeboats on board and no life preservers at all. What the Turks thought would happen next is not recorded, but doesn’t seem that hard to guess.

That guess would have been wrong, however, because there was a huge explosion aboard the ship on the morning of February 24, 1942, that caused it almost immediately to sink. Most passengers and crew went down with the ship. Some clung to pieces of wreckage only to die in the sea when no rescue vessels of any sort came to help. Of the 791 aboard (including about 100 children), in fact, only two survived in the water for more than a day: Lazar Dikof, the ship’s First Officer, and a teenaged boy. By morning, Dikof was dead. The boy, now the sole survivor of the Struma, was eventually rescued by some civilian Turks who passed by in a rowboat. And that boy was David Stoliar, the man who died twenty months ago at age ninety-one in the little town of Bend, Oregon, where he had lived for many years. His death went largely unnoticed at the time, reported only in local Oregon newspapers and in Haaretz.

Who sank the Struma? For many years, it was an open question. But eventually it was determined unequivocally that the ship was attacked by a Soviet submarine acting in accordance with standing instructions to sink any neutral ships that entered the Black Sea to prevent them from bringing supplies that could eventually have reached Germany. Whether the commander of the sub, D. M. Denezhko, knew he was essentially murdering almost eight hundred civilians is unknown. And so the Struma began its final journey, the one from the front pages of the world’s newspapers into oblivion, its very name unfamiliar to all but scholars of the Shoah. 

For almost sixty years, Stoliar said nothing, preferring to live out his life in peace without reference to the horror he experienced as a young man. But then, in 2001, the Canadian film director Simcha Jacobovici (who in a different lifetime was once one of my younger brother-in-law’s Hebrew School teachers at the Holy Blossom Temple in Toronto) found him and coaxed him into appearing in his documentary about the disaster called The Struma. There was, briefly, an awakening of interest in the story…and then it too disappeared, forgotten, beneath the waves of history. And that, until the other week when David Stoliar’s death was reported on at length in the New York Times, was that. (To see Robert D. McFadden’s lengthy story about Stoliar’s life and death, click here.)

I write today not merely to recall the Struma, however, lamentable though it may be that it’s been so largely forgotten. Nor do I write publicly to regret the fact that the Struma somehow never got the Hollywood-style treatment that the S.S. St. Louis and its doomed passengers got in the 1976 movie The Voyage of the Damned with its all-star cast and huge P.R. budget. But more than taking note, yet again, that Emily Dickinson was entirely right about fame being “a fickle food upon a shifting plate,” I think of the Struma as one of those examples of events that sound similar to others because we use the same language to describe them but which feel entirely different to the people whose legacy those events actually constitute.

There have been other terrible disasters at sea, obviously. Everybody knows about the Titanic and the Lusitania. (There’s a shifting plate here too, however: how many New Yorkers have heard of the General Slocum, the passenger steamboat that caught fire and sank in the East River on June 15, 1904? 1,021 of the ship’s passengers died that day, more than in any New York disaster other than 9/11. But I can’t recall ever hearing anything about it until I started my research for this letter.) The vocabulary used to describe these events—including the sinking of the Struma—is all more or less the same. But the feelings the story of the Struma awakens in me are wholly unrelated to those awakened by the other famous shipwrecks of our time. The Titanic was a true disaster, one that could and should have been averted. But the Struma is about something else entirely: the utter, absolute powerlessness of Jewish people in the face of an uncaring world that considers their very existence a problem and their annihilation the solution to that problem. The British could easily have saved every single one of those people, but they chose to do nothing. (And that, despite the fact that they were at war with those people’s would-be murderers.) The Turks certainly could have saved them too, and even more easily—merely by allowing them to leave their barely sea-worthy boat and find shelter in Turkey from their would-be murderers—but that too was not something the Turks saw as being in their own best interests. The Soviets, possessed of a mighty army and world-class intelligence services, could surely have ascertained that the cargo aboard the Struma was constituted solely of doomed souls facing death in Romania or life anywhere at all not under Nazi domination (including the unoccupied part of the Soviet Union itself), but they chose instead to sink the boat and let all aboard drown. Problem solved!

So when people talk about the Struma using the language of shipwrecks and at-sea disasters, it sounds vaguely right. But that is not at all how it feels, at least not to me personally.

The Struma is resting at the bottom of the Black Sea, its passengers long since gone to their eternal reward and its sole survivor now too gone from the world. So is the wreckage of the MV Mefküre, a Turkish ship carrying more than 300 Jewish refugees from Romania to Istanbul that the Soviet Navy also sank in the Black Sea on August 5, 1944, murdering in the process all but five of its passengers. The same world that forgot about the Struma has also forgotten—even more entirely, if that were only possible—about the Mefküre. But I remember them both. And when I say, as I so often do, that there is no possibility of the IDF being too powerful or well-armed, that there is no rational argument in favor of Israel seeking peace by making itself less strong or less able to defend itself successfully, or that the nations of the world are being untrue to their own history by pretending not to have any idea what they can ever have done to make Israel mistrustful of their real intentions, I am remembering the Struma in my heart and responding to the image thus conjured up of military powerlessness, diplomatic impotence, and utter and absolute helplessness. It is not a picture I wish to see replicated in the future…which is why I feel so unambiguously in favor of a strong Israel possessed of a mighty army, navy, and air force. And why I feel so little inclined to join the hand-wringers and nay-sayers for whom the Struma was just a boat and its passengers just victims of a world gone mad. Surely, the sinking of the Struma and the Mefküre were tragedies, but they are also potent symbols, their stories not only worth remembering as examples of terrible things that once occurred, but also worth taking to heart as a lesson about the world and the place of Israel among the nations. It is a sobering lesson, the one inspired by those sunken symbols. But that only makes the lesson unnerving and anxiety-provoking, not untrue. 
And that is what the Times’ belated obituary of David Stoliar inspired in me and moved me to want to write to you all about this week.

Thursday, January 21, 2016

Plus Ça Change, Baby….

Everybody knows Jean-Baptiste Alphonse Karr’s famous line regarding the illusory nature of change and what it means: plus ça change, plus c’est la même chose is almost always used correctly as a literary way to observe that the more things appear to change, the less they really change. That change itself is illusory, generally denoting a shift in the cosmetic while leaving the essential untouched. That when someone tells you society has changed and adds the words “and we have to change along with it,” that person is more often than not trying to justify some wished-for innovation in the way we live by presenting it as an inevitability rather than attempting to demonstrate why it actually is a good idea. Why the French is so often misattributed to Proust, I have no idea. But the idea itself fascinates me and is what I’d like to write about this week.

The specific kind of change that Americans love the most is progress. But unlike “change,” the word “progress” is a loaded term, carrying along with its basic meaning the intimation of approval, the suggestion that the development in question is not just change for its own sake, but change for the better as society attempts to allow its reach to exceed its grasp and thus to self-improve through the sheer force of its own will to morph into a finer iteration of its earlier versions. That the way we live now is dramatically different from the way people lived a century ago in the year of my father’s birth, let alone in 1816 or 1716, hardly seems worth bothering to demonstrate with examples. But whether things have really changed other than in terms of the specific way we wash our clothes or send each other letters—and if those changes can be labelled as true progress (that is, as the kind that results in a profoundly better society, not merely a different-looking one)—that is the question that engages me. Email is obviously a huge advance over snail mail, just as airplanes are incredibly more efficient than stagecoaches…but is efficiency a subcategory of true progress or is it just another example of the kind of change that, to go back to Karr’s epigram, doesn’t really change anything at all?

I’ve been reading and listening lately to an interesting debate between two professors at Northwestern University. On the one side, we have Professor Robert Gordon, author of The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War, published just this month by Princeton University Press. (To listen to a TED talk by the author summarizing the book’s findings, click here. To read more about the book itself, click here.) Professor Gordon’s argument, quickly summarized in the lecture and meticulously laid out in the book, is that the dramatic changes in American life that characterized the hundred-year stretch between 1870 and 1970 were real and profound, but that they will not be replicated in the years to come. And, that being the case, it would be foolish for our nation to develop policies that blithely assume that the future will replicate the past merely because everyone wishes the rate of progress we have experienced in the relatively recent past would continue unabated into the future.

There are, according to the professor, several different reasons for this, but the one that interests me the most has to do with the concept of innovation, a subcategory of change related to, but not quite synonymous with, progress. The inventions that totally altered life in the one hundred years between 1870 and 1970, the professor argues, have truly changed society through and through and not just with respect to its outer appearance. But this kind of progress cannot be endlessly replicated with ever-newer gadgets because those machines addressed needs that, although they could conceivably be addressed even more efficiently (i.e., by having washing machines that do the laundry even faster than current models can manage), still cannot—or at least will not—be addressed with revolutionary technologies that require the population completely to re-conceptualize the tasks at hand and the way in which those tasks are performed. (In other words, the goal of doing the laundry will always be to take dirty clothing and make it clean. But now that the process is automated and efficient enough, the likelihood of anyone inventing and marketing an entirely new, dramatically better, way to accomplish that same goal is minimal.) Other kinds of changes could have been dismissed as mere upswings in efficiency, but, taken as an aggregate, they have altered so much of how we live as to warrant describing the century in question as a true boundary between what came before and what has come and will come after.

Within the hundred years under consideration, for example, people stopped having to shop daily because of the introduction of electric refrigerators and this dramatically affected the nature of commerce in this country. The invention of the electric elevator made it possible to build buildings with many, many more floors than would have been possible if people had to walk up to their apartments, which innovation dramatically and permanently altered the nature of urban living. The invention of the internal combustion engine allowed America to divest itself of most of its horses, who in the nineteenth century stayed alive by eating produce grown on a full quarter of American farmland, and this in turn permanently changed the pricing of foodstuffs in the marketplace in a way that had profound implications for family life. The invention and installation of underground pipes allowed people no longer to have to walk to public wells to draw or pump their water, which changed the nature of neighborhoods and the level of hygiene that characterized urban, suburban, and even rural living. None of these things will be redone, let alone undone, because they all work well enough as is…and simply making them work better or more efficiently will not change society in anything like the way their initial introduction did.  So, to sum up, the concept is that there surely is such a thing as progress…but it is something that alters society when a perfect storm of factors materializes, not an inevitable feature of societal life that we should expect to characterize the future merely because it characterized the past.

On the other side of the argument is Joel Mokyr, a professor of economics at Northwestern, who argues that innovation is an endlessly replicating thing, that the flaw in Professor Gordon’s thinking is that he imagines the future in terms of the past. (For a summary of his thinking in this regard, click here.) But is that reasonable? The truth is that the large majority of the most interesting innovations, including ones that changed entire industries, derived not from needs long felt and otherwise satisfied, but from needs never previously perceived at all, needs that would have seemed unimaginable to earlier generations. The internet itself, for example, is not something someone invented to speak more efficiently to a need that generations past all dealt with, only less well and far less efficiently, but rather something entirely new, something that developed out of emerging technologies that suddenly came together to create something unprecedented that truly has changed the face of society as we know it. Are cell phones really just more efficient models of the huge plastic telephone attached to the wall of my parents’ kitchen? You could argue that, I suppose…but my parents’ phone didn’t take pictures or play music. You couldn’t read books on it, let alone daily newspapers and magazines. You certainly couldn’t speak to it and expect the telephone itself to answer you clearly and correctly when you asked it when the next train to Penn Station would leave Mineola or what some stock is trading at or what the weather is like in Auckland. So it’s only slightly correct to refer to my phone as a direct descendant of my parents’ wall unit, reasonable only in the same way you could refer to the space shuttle as a descendant of a birch canoe. All that being the case, it seems pointless to imagine that progress will slow when what we mean is merely that we have no idea what form it will take or in what direction it will take us.
I suppose we probably have come about as far as we can with washing machines and dishwashers: new versions will just do the same thing faster and better, not really differently. But what has that to do with the kind of progress that builds on itself to create not improvement of past things but entirely new things of which previous generations could not possibly have conceived?

I’d like to suggest a different way to think about the issue. What impresses me about the past is how similar, not how different, it is to the present. The computer on which I am writing this would surely have seemed unfathomable to my great-grandparents, but when I read the books that were published in their lifetimes—the great novels, say, of Henry James or Leo Tolstoy—I’m struck not by the enormous differences between the world depicted in those works and our world today, but by the similarities. Yes, the people they write about don’t have telephones at all, let alone cell phones. They don’t have indoor plumbing, most of them…and their homes are illuminated by gas lamps or even by candles. But the issues they face—and the people they are—are not really all that different from the ones we ourselves face and who we ourselves are. The issues that form the narrative core of these books—the relationship between husbands and wives or between parents and children, the delicate nature of friendship, the misery of betrayal, the consequences of judging character poorly, the yearning for adventure, the deep need to feel anchored and safe in one’s own home, the complicatedness of sibling rivalry, the complexity of adolescence, the excitement of sexual awakening (and its attendant woes and insecurities), the yearning for love, the need for friendship, the mixture of satisfaction and terror that results from self-knowledge—these are the same issues we face today, all of us. Nor are they faced, encountered, or dealt with differently because my iPhone 6S is a million times more sophisticated than the phone in my parents’ kitchen, let alone anything in a Henry James novel.

I suppose it all comes down to what we mean by progress. If we mean the introduction of ever more sophisticated items into the world of things, then there has been immeasurable progress since my great-grandparents’ day and the debate about whether this will now be a permanent feature of society or not is one worth having. But if we focus instead on the landscape of the human heart…then it’s hard to see in what profound way the world my great-grandparents inhabited differs from this one in which we live. I can’t even begin to imagine what gimcracks and geegaws my own great-grandchildren (please God) will have in their pockets, but my guess is that on every truly profound level they will be just like us, struggling to find a place in the world, to invent themselves, to learn how to love. Karr was wrong and right: wrong because things really do change…and right because, for all that things change profoundly and meaningfully, they also remain just as they were as we make our way from cradle to grave along the landscape of human life itself and try to negotiate the journey successfully and well.

Thursday, January 7, 2016

Living in History and Paying the Price

Like most of my readers, I suspect, I found myself somewhere between surprised and unnerved by the sudden crisis in Saudi-Iranian relations, but mostly I felt confused. Both regimes are, after all, run by radical Islamists who would appear—at least from this distance—to be each other’s natural allies. Both nations, each in its own way, are police states that feature as part of their national culture what the vast majority of Americans would consider to be a severely deficient understanding of the basic human rights that animate our own culture. Both are surely united in their hostility towards Israel—despite the occasional rumors that the Saudis might be softening their stance—and even if their relationships with our own country are not at all the same, that hardly seems to constitute a good enough reason for them to dislike each other so intensely. Must the friends of our enemies also be our enemies? Is this somehow all about us? Or is it primarily about them?

Obviously, and leaving aside for the moment what this possibly could have to do with our own nation, the whole thing has to do—at least in large part—with the apparently unbridgeable chasm between Sunnis and Shiites, the Saudis belonging to the former group and the Iranians to the latter. But that too is confusing to me because the basic distinction between the two groups is rooted in a dispute anchored in distant times that, even if never truly resolved, by all rights ought to have vanished long ago into the swirling mists of forgotten history.

The story itself is at once complicated and simple. The prophet Muhammad died in 632 C.E. without formally passing along the mantle of spiritual leadership to a worthy successor, thus leaving Islam—then a small but growing sect made up primarily of the Prophet’s personal followers—without a leader who could claim the ultimate authority to lead Muslims in his stead. In retrospect, this was a huge error. At first, one of Muhammad’s aides, one Abu Bakr, succeeded him as leader, but others felt the Prophet had indeed designated a successor in his own son-in-law, a man named Ali. Eventually, Ali did become caliph (a title that means “successor” and specifically denotes the individual replacing Muhammad) and, after he was assassinated (he was stabbed to death in a mosque in present-day Iraq), his sons, Hasan and Hussein, stepped up to take their father’s position in the Muslim world. Both were eventually murdered as well, however, and so their supporters became known as Shiites, the anglicized version of the Arabic words that mean “partisans of Ali.” And their position was relatively clear: the earliest Shiites promulgated the opinion that the Muslim world should only be led by someone physically descended from Muhammad. The rest of the Muslim world, devoted to the Sunnah (the Arabic word for “tradition,” in this case denoting specifically the Prophet’s tradition), adopted the alternate opinion that their leader should be someone characterized by piety and learning, i.e., by devotion to the Sunnah, but not necessarily a blood relation of the Prophet or one of such a relation’s descendants.

And so began the schism that continues to fuel the fires of the Middle East. Nor do the sides stack up evenly: more than eighty-five percent of the world’s billion and a half Muslims are Sunni, as is the leadership and royal house of Saudi Arabia. The Iranians are Shiites, as are their leaders and client groups across the Middle East. The only Shiite-majority countries in the Muslim world, in fact, are Iran, Iraq, Azerbaijan, and Bahrain. All the other countries with large Muslim populations—including Turkey, Pakistan, India, Bangladesh, Indonesia, and Malaysia—have Sunni majorities among their Muslims. (The numbers for our own country are slightly confusing because, although a large majority of American Muslims are Sunni, most Arab Americans are Christians, not Muslims at all.)

That whole story hardly seems enough to warrant the level of intense vituperation we have been witnessing over the last week—the Sunni Saudis beheading a Shiite sheikh who militated for the rights of Shiites in the Eastern Province of the kingdom and the Shiite Iranians launching a violent attack on the Saudi embassy in Tehran. Nor do Shiites and Sunni Muslims differ too dramatically in terms of their beliefs—both revere Muhammad, consider the Quran to be a book of divine revelation, and follow the five pillars of Islam: faith, prayer, charity, fasting during Ramadan, and the obligation to undertake a pilgrimage to Mecca in the course of one’s lifetime. From outside the tent, they don’t seem particularly different at all! And yet the animus is so real as to be in the process of altering the whole tableau of Middle Eastern politics almost before our eyes.

Maybe the whole issue of succession seems like such an odd thing to become so angry about precisely because, in our American meritocracy, it goes without saying that leaders are rightly chosen on the basis of their moral worth and of the positions they espouse. Indeed, in our American republic, only two sons of presidents have themselves gone on to become president, and neither inherited the position from his father. Even in the modern monarchies of Western Europe, in fact, the notion that the crown passes from the sitting monarch to that monarch’s heirs is only tolerated because the monarchs in question have no real political power.

It all sounds so foreign and odd. But that’s only because we have forgotten so much of our own history—and thus fail to know that the course of Jewish history too was altered by a violent war of succession…one likewise characterized by multiple assassinations, ferocious street demonstrations, and civil unrest so violent that it set in motion the events that eventually cost the Jewish people their sovereignty. It’s a story worth telling!

We think of the Maccabees as heroes, as the guerilla warriors who defeated the far more powerful armies of Antiochus IV to create an independent Jewish state in the Land of Israel, as worthy role models for the kind of principled opposition to tyranny that we all consider not merely praiseworthy but supremely so. All of the above is true, more or less, but there’s another part of the story, a rarely told part that we’ve chosen to ignore and to forget.

The Maccabees entered history, as noted, in the context of the famous Chanukah story. But history didn’t stop when the miracle had run its course and a new cruse of oil was finally prepared. Nor did the war against the Seleucid Empire end.  The fighting continued for years. Judah the Maccabee was killed in battle in the year 160 B.C.E., in fact, four years after the rededication of the Temple. He was succeeded by his brother Jonathan, but Jonathan was murdered (along with a full 1,000 of his soldiers) in 142 B.C.E. by a pretender to the Seleucid throne who had lured him to a meeting at which they were supposed to discuss an alliance. He was succeeded by another one of the brothers, Simon.  Simon did a lot of good—it was during his years of leadership that the Roman Republic formally recognized the Jewish State in 139 B.C.E.  But in the winter of 135 B.C.E., Simon and two of his three sons were murdered by his own son-in-law, an ambitious churl who hoped personally to succeed his father-in-law as national leader.

In fact, Simon was succeeded by his remaining son, John Hyrcanus, who ruled from 134 to 104 B.C.E. and who was the first (and almost only) Maccabean leader to die in bed. He was followed by his son, Judah Aristobulus, who took the title of king and who is remembered, among other things, for murdering his own mother by imprisoning her until she starved to death and for conspiring with his wife to murder his brother, whom he suspected of conspiring to murder him and seize the throne.

One thing led to another. Judah Aristobulus was succeeded on the throne by his brother, Alexander Yannai, who died in the course of a military campaign and was succeeded by his own wife, Queen Salome Alexandra. (Alexander Yannai is remembered, among other things, for crucifying eight hundred of his enemies in Jerusalem.) And then things got really bad. King Yannai and Queen Salome had two sons, Hyrcanus and Aristobulus. When Salome died, no clear heir was apparent…and so the sons went to war with each other. Eventually, the entire country erupted into a civil war as bloody as they come and became so weakened by the fighting that the door opened to the Roman general Gnaeus Pompeius Magnus, known to history as Pompey and already acclaimed as “Conqueror of Asia” by his countrymen. He came onto the scene and promised to restore order and bring peace to a war-ravaged land. The people were thrilled. And so, in 63 B.C.E., the independent Jewish kingdom established by the Maccabees a century earlier became a Roman protectorate. And so ended Jewish independence in the Land of Israel for more than two thousand years.

Who ever heard of any of these people? Historians of antiquity, obviously, know their names. But how many “regular” Jewish people, all of whom could retell the story of the Chanukah miracle easily, could move forward into the rest of the second century B.C.E. and the first third of the first century to see how the question of succession led to violence and eventual disaster?

So the question asks itself: are we the wise ones to have moved on and no longer to consider any of these live issues…or are the Muslims right to struggle with unresolved issues even after all this time? I suppose it would depend whom you ask! There is surely something to be said for moving on, for leaving the past behind, for allowing the past to morph into the present without constantly undermining it with unresolved issues from centuries (let alone millennia) ago. And yet…it hardly sounds like a good idea to consider history a kind of burden to be set down as quickly as possible, a prison from which escape is possible only for those who choose to leave their cells behind and move into the future unencumbered by ancient instances of friction, violence, and bloodshed. I suppose that the correct answer is neither of the above: to feel unable to step away from an ancient dispute and to risk the lives of countless civilians as it is adjudicated not in the courtroom or the study hall but on the battlefield and in the street—that surely sounds like a loser’s proposition. But to have gone to the extreme that we ourselves have gone in making our own history unfamiliar even to the relatively well educated among us—that also seems like a poor plan if we wish to live in a present that authentically replicates the best of the past.

Iran and Saudi Arabia are fighting an ancient battle. There are, obviously, a thousand side issues (some of which directly concern our own country) that are fueling this particular fire. But the basic principle is that both sides of the dispute are unwilling to step away from the past to create a finer, better present. In our Jewish world, we have solved our version of that problem by making our own history a closed book to most…and that too cannot be a rational way to move forward into the future if we wish the future to be a meaningful extension of the past that replicates its finest accomplishments and makes of the world we will bequeath to our children a satisfying, intelligently constructed midrash on the world our ancestors bequeathed to us. Living in history without being enslaved to it—that would be the great goal. But neither Muslims nor Jews have attained it, the former still fighting ancient battles they seem unable to step away from and the latter achieving freedom from the past by making it something they know almost nothing of.

Thursday, December 24, 2015

Church and State

I’ve never fully understood how exactly it can be constitutional for Christmas to be a federal holiday in a nation that endlessly prides itself on how carefully it guards the boundary between church and state. I come to the issue, therefore, from precisely the opposite direction from all those outraged types who write letters to the editor at this time of year to express their indignation at having received a “Happy Holidays” card from their newspaper deliverer or local school board instead of a bona fide Christmas card, or their irritation over their end-of-the-year office party being thrown to celebrate not Christmas but something fully non-specific and only vaguely festive like “the holidays” or, even more bizarrely, winter itself. I didn’t get a “holiday card” this year from the Obamas—perhaps (but even I myself don’t really think this) a subtle response to all those e-mails about the Iran deal—but I did get one from the Bidens and the Andrew Cuomos…and, true to P.C. form, neither mentions any actual holiday. (The Cuomos’ card wishes us well during the “holiday season.” The Bidens’ wishes us “many blessings in 2016.”)  It’s an unsubtle ruse. I know what they mean. They know that I know what they mean. And I know that they know it too. (The holly wreaths with red ribbons adorning the windows of what I suppose must be the Biden home—their “real” home, I think, not Number One Observatory Circle—on the cover of their card are the giveaway.) Still, I’ve calmed down over the years. I no longer find it annoying to be wished a merry Christmas by salespeople trying to be friendly and pleasant, or not too annoying. I cleverly but probably over-subtly register my pique with the whole thing by avoiding malls and post offices, even banks, in December as best I can. I suppose I can live with the White House having a Christmas tree. 
But I still don’t fully understand how it can be legal for the government formally and purposefully to foster the public celebration of a religion-specific festival in a nation of self-proclaimed disestablishmentarians.

Nor is the point that I simply disagree. It’s also that I’ve never been able quite to understand why Christians who take their faith seriously would even want people outside the church to glom onto their best holiday, one possessed of the kind of deep spiritual significance that can only be diluted by bringing into the mix people for whom the holiday has no religious meaning at all. Isn’t it just a bit insulting to people who take their Christian faith seriously to suggest that even non-belief in the most basic articles of that faith does not constitute sufficient reason not to celebrate its festivals? I can’t see how it could not be! And so, when I see those bumper stickers encouraging Christians to put the Christ back into Christmas, I’m in complete agreement because I too would like nothing more than for Christmas to turn back into a Christian holiday possessed of deep meaning for the faithful, something that it would be absurd, even mildly offensive, for non-Christians to embrace at all, let alone enthusiastically. Is it really all about selling toys? I suppose it probably is!

Nor do I feel this way only about other people’s religions: I am an equal-opportunity Grinch. When I hear that the White House is having yet another Pesach seder and that the President and First Lady are both planning to attend, I feel a sense of dismay tinged with guilt: the latter because I realize I’m supposed to be thrilled that the leader of the free world is willing to make such a public display of the warmth he feels towards his Jewish co-citizens, but the former because I don’t really want non-Jews to co-opt Jewish rituals to make some sort of dramatic statement about their own liberality without actually embracing any of the ideas or concepts that undergird the rituals in question. When I read a few weeks ago about the President hosting a festive menorah-lighting ceremony at the White House, I felt the same mix of pride and unease. I get it—I’m supposed to be thrilled that Jewish Americans are welcome to perform Jewish rituals in the White House. But shouldn’t the most public of our nation’s buildings specifically not be the backdrop for religion-specific rituals that all Americans neither can nor should embrace? Nor do I fix my gaze in this regard only on the government: I find the endless efforts of Chabad to set up those giant, weirdly-angular menorahs in the public square equally unsettling. Surely, they’re acting out of conviction. But I can’t help thinking that every step we take towards weakening the separation of church and state—an expression, by the way, that most seem to suppose comes from the Constitution, but which was actually coined by Thomas Jefferson years later—is a step towards weakening our right to pursue our spiritual path without interference from outside parties, most definitely including the federal government. Or any government.

This year, though, my feelings about the separation of church and state are different than in years past because it seems impossible to consider these issues any longer without bringing Muslim America into the mix. Our 2.7 million Muslim co-citizens are clearly having a rough time. Article after article in the newspapers I read and at the on-line news sites I frequent details almost daily how complicated a time this is for Muslims who must grapple with the fact that there are lots of people out there selling a version of Islam radically (to use precisely the right word) different from their own. And it seems slowly to be dawning on American Muslims that, particularly after San Bernardino, it will no longer be enough merely to insist that the jihadist version of their faith is just a perversion of Islam and thus not something “regular” Muslims need to think or worry about. (That, of course, is precisely what the Islamist radicals behind all these terrorist strikes say about non-radical Islam! For the most recent of these articles, this one by Laurie Goodman and published in the New York Times earlier this week, click here.) But precisely when it feels like the right thing to do would be to encourage American Muslims to break formally and absolutely with the extremists in their midst by getting the President to welcome American Muslims to the White House for another Eid al-Fitr banquet like the one he hosted last June (in other words, by creating the sense that American Muslims can be part of our national fabric in the same way that Christians and Jews can be and are), that is precisely when I think we should redouble our efforts to re-erect the once unscalable wall between church and state that has slowly been eroded over the last decades.

American Muslims have a huge problem on their hands. They themselves are not such a unified group. They are slowly awakening to the fact that there are among them jihadists like the San Bernardino killers…and that the responsibility for tolerating the kind of extremism that leads to violence cannot solely be set on the shoulders of overseas clerics. My sense is that we would do well to make it clear that our secular government does not instruct its citizens what to believe or what spiritual path to follow, and that the whole concept of religious freedom only works if the sole role the government plays in the internal workings of American faith communities is to play no role at all. If Muslims wish to renounce jihadism and terror, then they are going to have to stand up and be counted…on their own and in their own communities and mosques.

Just recently, I read about something called the Muslim Reform Movement, a tiny organization headed by just fifteen Muslim leaders from the U.S., Canada, the U.K., and Denmark who have begun to take matters into their own hands to foster a version of Islam that is liberal, tolerant, and broad-minded. (To see more about the organization, click here. To read a very interesting editorial that appeared two weeks ago in the Boston Globe about the group, click here.) I know that many of us view efforts like this with extreme skepticism. I feel that way myself. And, given the fact that there are something like 1.6 billion Muslims in the world, the influence of these fifteen brave souls will be, at least at first, severely limited. Still, the solution cannot be imposed from without: what Islam will be like in 2070, or thereabouts, when the number of Muslims in the world surpasses the number of Christians, is in the hands of today’s Muslims. Merely paying lip service to pluralism and tolerance will not be anywhere near enough. And, yes, to raise the issue that (at least for most) dare not speak its name, leaving Israel out of the mix would also constitute a grave error: if Muslims are going to foster an American version of Islam that is truly pluralistic and progressive, then they are going to have to find a way to embrace the reality of Israel and the presence of the Jewish state among the nations of the world. Absent that, the whole undertaking will be, at least as far as I myself am concerned, doomed to irrelevance. If I can live with an Islamic Iran, then America’s Muslims can live with a Jewish Israel.

American Muslims do not need to be patronized by the government with special White House photo ops; they need to be left alone to chart a course forward that will affect the history of the world in a positive way by renouncing violence and terror…and embracing the core values that rest at the center of American culture, with the separation of church and state foremost among them. Many of you—both congregants and readers—have responded negatively when I’ve written or preached about this possibility in the past, expressing the notion that I am living in a fool’s paradise if I think that Islam could possibly embrace the liberal values that are the beating heart of the Western democratic enterprise. I suppose I could be. (I’m a rabbi, not a prophet!) But the Pew Research Center projects that there will be 2.8 billion Muslims in the world by mid-century…and that number makes it crucial for us in this country to support the moderates and liberals who would reform Islam. Could these people succeed? It is hard to say. Certainly, the odds are against them. But it is precisely in our country, where the wall between religion and government was meant by our founders to be iron-clad, that the kind of protestant Islam that the world so desperately needs could possibly take root and flourish. The chances of success are not good at all. But not good is better than non-existent…and so, as a new year dawns on our troubled land, I suggest we take “not good” as the best option available and see how far we can get.

Thursday, December 17, 2015


I have generally been an admirer of Dennis Prager’s writing, and particularly of the books he jointly authored with Joseph Telushkin. Nonetheless, I found myself aghast at a piece he published the other week on the website of the Los Angeles-based Jewish Journal, in which he writes acidulously about people who wish to find a dignified place in the world for transgender people. He admits readily that it must be “awful” to go through life possessed of the conviction that you are a prisoner in your own body, that your gender and sex are so out of sync that you can’t find a place for yourself in the world, that neither of the doors at the end of the hall leading to the restrooms (the one labelled “Men” and the other, “Women”) feels as though it describes you in quite the same way it appears to describe everybody else in the world, or almost everybody. But when he turns his withering gaze to people (like myself) who feel for such people and wish to find a way for them to function in society other than as outcasts and freaks (and other than by telling them they simply can’t go to school, can’t join a gym, can’t use a public restroom, can’t frequent a public swimming pool, etc.), he seems to have forgotten the pain that he himself acknowledges surely must result from feeling trapped in your own skin and writes as though gender dysphoria were just another thing someone somewhere made up to justify special treatment for some tiny group of whiners who don’t want to play by the same rules as the rest of the world.

Then, to add fuel to his fire, he turns to his readers and attempts to explain how, given the Torah’s prohibition of crossdressing, any Jewish person could possibly fall for the whole transgender scam in the first place. (The fact that transvestism and gender dysphoria are not at all the same thing appears unknown to the author.) First, he suggests shamelessly that those who don’t share his view about transgender people must obviously also believe, and I quote, that “the Torah is essentially useless as a guide to living,” and that, whenever their own opinion differs from that put forth in Scripture, they must be the kind of spiritual egotists who simply assume that the Torah, not they themselves, must be wrong. And then, as if that line of thinking weren’t insulting enough, he offers an alternate explanation: that any who feel for transgender people must clearly have been tricked by their own sense of compassion into betraying the values of their faith and their God.

I don’t want to write here about the issues facing transgender people per se. It’s a big topic that I hope to address in more detail at a later time, and it’s also true that my own thinking is evolving slowly as I learn more and read more. Instead, I’d like to discuss the concept of compassion…and particularly in light of the suggestion in Prager’s article that allowing one’s sense of compassion to justify the effort to bring one’s allegiance to Scripture into sync with one’s sense of right and wrong is a sign of spiritual depravity, or at least of one’s arrogant assumption that one is more able than God to find the boundary line between right and wrong.

Compassion, for most people, is an apple-pie value, a moral attribute so unambiguously virtuous as to make it odd even to question its worth. The word in English suggests as much: the “com” part means “with,” and the “passion” part comes from the Latin word for “suffering.” Thus “compassion” is the quality of being able not solely to suffer in your own right, but to feel—or at least to feel sensitive to—the misery also of others. Compassionate people, therefore, live at the intersection of sympathy and empathy, always trying to be guided not only by the way they themselves feel, but also—and perhaps even more so—by the way they imagine other people feel. The compassionate individual, therefore, is someone who understands that kindness does not imply moral weakness, let alone depravity, but moral strength: it is the quality of seeing the world through another’s eyes and acting accordingly not because one is too stupid to have an opinion of one’s own, but because one has enough respect for others also to respect their opinions and the specific way they view and interpret the world.

To apply the concept to the transgender people of this world is not to be weak-willed or foolish, let alone immoral. It is merely to look at someone suffering in the world and, instead of mocking that person for having to struggle with issues with which you yourself have been spared from having to grapple, finding it in your heart to wish for that person to find a path forward in life that does not involve endless degradation or self-denial. Prager’s coarse prediction that treating transgender people with compassion will lead directly to schools being required, eventually by law, to allow young people with boys’ bodies to parade around naked in the girls’ locker room seems beyond exaggerated: the idea that embracing compassion will lead directly to public vulgarity seems to me to posit a bizarrely narrow sense of how people suffering from gender dysphoria—and particularly young people—could be helped without that help impinging on the natural rights of all people to feel secure and safe in public washrooms and in their swimming pools’ changing areas and in the locker rooms at their gyms. (The locker room issue is real, to be sure: click here and here. My point is that there’s something inherently bogus about the supposition that the only choices are to tolerate inappropriate, unsettling behavior or to treat transgender young people harshly and without compassion. Surely, a nation as clever as our own can come up with a solution that leaves the dignity of all parties intact!)

And then Prager goes on to give another example of misplaced compassion leading its adherents down the road to perdition: race-based affirmative action. Affirmative action is the kind of complicated concept the constitutionality of which the Supreme Court itself is currently attempting to unravel. Nor is it obvious, constitutional or not, how effective a tool it actually is. The latest argument against it, usually referenced with the word “mismatch,” implies that giving students drawn from underrepresented minorities places in colleges to which they might not otherwise be admitted is actually a disservice to them, since they cannot possibly compete with the “regular” students who got into those schools in the normal way and without any extra help. This, in Prager’s opinion, constitutes yet another example of how compassion can lead past “just” political correctness to actual harm.

Chief Justice Roberts makes sense to me when he notes that “the way to stop discrimination on the basis of race is to stop discriminating on the basis of race.” And there are surely many real reasons to take the whole “mismatch” issue seriously. Just lately I’ve read two pieces on the topic on the website of the Washington Post that impressed me and that I recommend to my readers. In one of them, Richard H. Sander, a professor of law at the U.C.L.A. School of Law, argues persuasively that the best interests of minority students are not served by helping them into schools in which they are unlikely to succeed. In the other, Richard Rothstein, senior fellow at the Chief Justice Earl Warren Institute on Law and Social Policy at the University of California, Berkeley, School of Law, makes an equally persuasive argument that the whole “mismatch” issue is exaggerated and that the only practical way to deal with the hurdles young black students face as they make their way forward in the world is to make sure that they are not denied educational opportunities that by all rights should be theirs. Both make excellent arguments, but, regardless of which view eventually prevails, the underrepresentation of certain racial, ethnic, and social-class-based minorities in our best universities is a real issue to be pondered by Americans devoted to equality and, yes, possessed of a spirit of compassion for the disadvantaged. (To see Sander’s piece, click here. To see Rothstein’s essay, which originally appeared in American Prospect, click here.) In any event, both authors clearly have the same larger goal in mind: the creation of a color-blind society in which race neither enhances anyone’s chances for success nor detracts from them. Nor would either argue, I suspect, that there is anything base about feeling compelled by one’s sense of compassion for the underprivileged to work for a more just society. How best actually to help is a different issue.
But to argue that compassion itself is the problem is a perverse argument for anyone to make, and particularly for someone as steeped in Jewish tradition as Dennis Prager.

Prager’s essay ends with a rhetorical question: “If the Torah is not our guide, who or what will be?” It sounds like such a simple choice when put that way: either we embrace the Torah and allow it to guide us forward, or else we discard it and choose a different book or individual as the font of wisdom from which we drink and as our moral guide through life. But that is more of a fool’s choice than a serious one. We hold fast to the Torah as our tree of life and we endlessly study its intricacies and riddles. But for all our endless lip-service to the notion of the Torah as God-given Scripture suffused with its Author’s divine spirit, we are not biblical Jews whose sole allegiance is to the simple meaning of Scripture, and never have Jews believed that it could be possible to be faithful to God’s law while behaving immorally at the same time. To look at someone who is riven with conflict about his or her gender identity and not to respond with kindness, with compassion, and with a willingness to work to find a way for such people to know the kind of inner peace that comes naturally to people not afflicted with gender dysphoria—that would be to turn our backs on the lessons the Torah teaches: to see the divine image in all humankind…and to bring only compassion and kindness to bear in evaluating the downcast and the marginalized in society. And it would also be to ignore the fact that being compassionate is specifically listed in Scripture as one of the thirteen attributes of God, a virtue therefore to be cherished and embraced by all who would walk in God’s ways.

Thursday, December 10, 2015

Chanukah 2015

As I was reading the paper the other day, I unexpectedly came across an article about some parallel scientific studies being undertaken in Denmark and in our country. At first, they sounded like the kind of detailed, complicated studies which only scientists could love…or even understand. But then, upon further reflection, I found myself drawn to them and wishing to learn more. And then, entirely unexpectedly, my thoughts turned to one of the riddles of Chanukah…and I found a plausible answer sitting right before my eyes.

When I was in high school, the concept of genetic heritage was presented to us as a kind of code embedded in our cells that we are able to pass along to our offspring if and when we manage to reproduce. As opposed to, say, citizenship, which can be passed along from parents to children but which has no physical aspect to its existence, we were taught to think of our genetic heritage as something fully real in the physical sense (because genes, teensy-weensy though they may be, exist as actual, physical things) and thus not that different from money or property or any other part of a parent’s estate that a child might acquire as a gift from a still-living parent. 

How it all worked was a bit mysterious, and surely more than slightly arbitrary. Unless they are identical twins, for example, siblings receive different sets of these gifts from their same two parents. This accounts for the differences between them and was explained to us with reference to the fact that children have two parents, not one, and that the various parts of those parents’ genetic heritage combine in different ways on different conceptive occasions to create different genetic gifts to a couple’s different children. But our genetic heritage was presented to us not only as arbitrary, but also as immutable: you can do what you can to resist the siren call of your genes, but they constitute a gift—generally some combination of blessing and burden—that cannot be altered, only inherited and gratefully accepted, actively resisted, or passively given in to. I didn’t really understand the whole thing then and I’m sure I don’t fully understand it now. But one thing that was completely clear, even to my tenth-grade self, was that genetics is unalterable destiny, something to be pleased about or struggled against but about which you can’t do a damn thing! Nor, needless to say, can you control the contents of your own future genetic gift to whatever offspring you may eventually produce.

Apparently, I was wrong. In 2010, several professors at the University of Copenhagen found that they could alter the sperm of male rats not by addressing their genetic make-up at all but rather by subjecting them to different sets of experiences. One set of rats, for example, was made obese by being fed very high-fat foods. This was a post-birth phenomenon, obviously. So, at least theoretically, the rats—none of which was predisposed to obesity—should not have had a higher percentage of obese offspring than rats that were fed a normal diet. But they did. And so began a long, complex set of experiments intended to determine if the genetic heritage bequeathed to offspring can be altered by experience. In 2013, a group of scientists led by Adelheid Soubry, a molecular epidemiologist at the Duke University Medical Center in North Carolina, performed a similar study on human subjects and concluded that experience can indeed alter a man’s sperm in a way that affects the genes he bequeaths to his offspring. And now the Danes have published a study in a very respected journal, Cell Metabolism, that supports that conclusion. (The science is complicated and I won’t attempt to review it here. It has to do with the way sperm is or isn’t altered by experience to bring certain features of a man’s genetic heritage to the fore. The genes themselves are not supposed to change through experience. But if the specific way they are configured in the context of reproduction can be weighted differently by some specific experience that the man in question has had, then it more or less comes to the same thing. Or at least it does from the vantage point of the embryo that inherits that man’s DNA configured differently than it might otherwise have been.)

Others are less sure about how meaningful the results really are. Many of the arguments against accepting the results of these studies are very complex but, to the extent I was able to follow them, also very interesting. To learn more about these studies, both pro and con, click here to read the article mentioned above, written by Carl Zimmer and published in the New York Times last week. To read a précis of the Cell Metabolism article (not recommended for people who last encountered the study of biology in tenth grade), click here.

I’m hardly in a position to offer an opinion about the worth of the research, but I find it fascinating nonetheless…and not solely because of its implications for our understanding of the human reproductive process. What I find fascinating is the possibility that the role of experience might be no less meaningful on the national level, as a people moves forward through history and bequeaths its national culture to each new generation.

There’s no question that Judaism itself—as well as its much-maligned stepsister, Jewishness—has developed over the millennia. Every student of the Bible can see how different modern Jewish religion is from the faith depicted in the pages of Scripture. But Judaism today isn’t only different from the Israelites’ religion in biblical times. It is also dramatically different from the Judaism described in the Talmud and even, in profound and meaningful ways, from the Judaism of medieval times. That religions develop over dozens of generations is hardly a great discovery. But what makes religions develop in the specific ways they do develop? What makes some innovations successful and others wholly unsuccessful? Why does an entire people barely pause to notice when whole bodies of scriptural law are summarily dropped—I’m thinking, for example, of the elaborate laws that the Torah sets forth governing inheritance, laws more or less universally ignored today including in the most pious circles—while other practices dating back only three or four centuries have not only established themselves as authentic Jewish rituals but are universally observed in every synagogue community? Are these developments entirely arbitrary? Or is it possible that experience shapes the genetic code—or whatever you’d call it on the national level—that passes silently and subtly from generation to generation? In other words, we are used to thinking of history as the result of Jewishness—what happened to us being a function of who we are. But what if the reverse were true (or also true)? What if history were rationally to be understood as the set of nation-wide experiences that rests invisibly at the generative core of Jewish life, not unlike the way the sperm that conveys a man’s genetic heritage to his newly conceived child vanishes into the embryo and is never heard from again, other than by manifesting itself in the nature and culture of the man that embryo eventually grows to become?

It would be interesting to think of Chanukah in that vein. It’s not a biblical holiday. Lots of other events of arguably equal historical importance failed to turn into holidays. (One of the few books from outside the standard rabbinic literary corpus to survive intact from the early rabbinic period, Megillat Taanit, is basically a detailed list of thirty-five such politically and historically important days.) Chanukah should have been in that category—a week of days on an ancient list during which eulogizing and fasting were forbidden because of some positive historical event that once happened. But somehow that isn’t what happened. The experiences of exile and restoration, of being assaulted by a hostile culture and having to find a way to preserve our national cultural heritage despite the pressure to adopt what is touted to us as “world” culture (and thus by definition something superior to our rinky-dink set of beliefs, customs, stories, and ceremonies), the experience of finding the courage to stand up to the world and refuse to vanish merely because a set of self-appointed pundits can’t understand why we wouldn’t want to be a modern nation according to their definition of the term…that set of experiences related to the nation growing up spiritually, nationally, militarily, economically, and, if one can say such a thing about nations, emotionally…that was something that shaped our national DNA permanently and left us different than we otherwise might have been.

That a man’s experiences in life can alter the destiny of his children by affecting his sperm in specific ways is a tantalizing notion. Whether it’s true, who knows? But that the same could be true of national cultures—that they are not so much the source of national experience as they are the product of those experiences’ effect on the transmission of culture to subsequent generations—that theory strikes me as truly tantalizing. It could go a long way toward explaining why Chanukah, which shouldn’t really have been a festival in the first place and which certainly doesn’t feel like it merits a major place on the Jewish calendar, has taken such a prominent position in our festal year. The rabbis of ancient times had no difficulty permitting the blessing recited while the Chanukah candles are lit to refer to God as having commanded us to kindle them. That that commandment appears nowhere in the Torah is a fact the rabbis surely knew perfectly well—Judah Maccabee lived a full millennium after Moses. But perhaps the rabbis were onto something nevertheless. Could it be that God ultimately sanctifies the House of Israel specifically by allowing this concept of experience-altered heritage to guide the nation’s religious practices? Could the plan all along have been that, no matter how far afield of Scripture Jews allow their faith to develop, they will always feel themselves under God’s watchful protection and truly to be sanctified by God’s commandment to act in harmony with their own historical experience and its exigencies? That is the thought I offer you to ponder as Chanukah draws to a close and we move on to less festive weeks and, presumably, the eventual arrival of “real” winter.