Thursday, December 20, 2018

At Year's End


I’ll be away next week, so this will be my last letter of 2018 to all of you, my faithful readers for all these many years. (For the record, this is my 442nd letter since I began writing them in 2007.  So those of you who have been reading along for all these years probably know me better than I sometimes think I know myself.) As regular readers know, I often like to riff off of recently published essays elsewhere. And so, for this final letter of a soon-to-be-gone secular year, I’d like to respond to an essay that appeared in the Sunday Review section of last Sunday’s New York Times. The piece, written by a Laura Turner (whom the Times vaguely identifies solely as “a writer”), was basically written to set forth the triple thesis that, to quote the author, you can’t go to church on your phone, internet church is not real church, and you cannot be part of an actual community by putting yourself in the company solely of virtual people.
It’s not that hard to see how we’ve come to a point at which such an essay sounds like a measured, thoughtful response to a phenomenon of modern life and not like the premise of a wacky science fiction novel.

I myself am a good example. I hardly ever go to the post office, because I conduct more or less all my correspondence by email. I rarely go to the bank because I pay all my bills electronically and deposit any actual paper checks I receive through my phone. (I can, however, also remember when that last part really would have sounded like science fiction.) Even though there are any number of large department stores within a fifteen-minute drive of my home, I make most of my purchases online. All of these would once have been opportunities for interaction with other human beings, with neighbors and strangers, with people I might over years have come to think of as friends…or at least as familiar faces. There was some degree of pleasure in those encounters, even in the fleeting ones, but each has now been replaced by a streamlined process that permits me to accomplish wholly on my own what would once have required at least some actual contact with other people. Efficiency is a good thing, obviously. But there is a point of diminishing returns as well: it's a lonely world we've constructed for ourselves, this ultra-efficient digital age in which it feels necessary to justify taking the time to do even something as inconsequential as buying a pair of socks when you can accomplish the exact same thing—and probably for less money—without even standing up from your computer, let alone getting in your car and driving somewhere.
So Laura Turner’s essay is addressing what really is just the next step in an already ongoing progression of societal innovations that purport to improve life by making some specific task doable without lifting your hands from the keyboard: the invention of the virtual congregation able to provide the succor and support of affiliation without imposing the tedious necessity on the would-be congregant of actually leaving home and sallying forth into the world, much less having to encounter actual people on a spiritual journey that, at least in a certain rarified sense, must be taken alone anyway.

But there's alone all by yourself and there's alone in the company of likeminded others. And Jewish legal tradition—which certainly acknowledges the intensely personal nature of prayer—is also very clear about the importance of communal prayer being undertaken within the bosom of an actual community. Indeed, our traditional texts go so far as to discuss—and entirely seriously—whether a minyan of ten can be duly constituted of nine people well inside a room and a tenth merely standing in the doorway, or of nine people in a room and a tenth standing outside the building but looking in through an open window. As usual in cases like this, the exceptions prove the rule: our classical texts permit the three needed to recite the formal introduction to the Grace after Meals merely to be within eyeshot of each other and specifically not to have to be in the same room, but that is because, in the end, there is not imagined to be any enhancement for the individual in the spiritual experience of being part of the requisite three (called a zimmun). When we move up from that basic introductory liturgy to the slightly expanded version recited only in the presence of ten, however, mere visual contact is specifically deemed not enough: the ten have actually to be in the same room, in the same space.
There is some reasonable wiggle-room provided in the sources. Once the quorum is achieved, then even individuals who may not be counted in the quorum because they are not physically present may be permitted to respond when the liturgy requires a response. (For a detailed exposition of the whole issue—and the full range of halakhic options and possibilities—written by my colleague and friend of forty-plus years, Rabbi Avram Israel Reisner of Baltimore, click here.) And there is a whole interesting Pesach-related dimension to the discussion as well, because the Torah's original framework for requiring that worship take place in a single "place" has to do, it turns out, not with prayer at all but with the ingestion of the paschal lamb on the eve of Passover.

That Pesach angle is more pertinent than it might at first seem in that the whole concept seems to be rooted in the interesting notion that, because the festival exists specifically to commemorate the Israelites’ passage from slavery to freedom, the formal ingestion of the paschal sacrifice can only take place in the context of community because true freedom can only be attained in the context of community, of people occupying the same space, breathing the same air, making room for each other in their personal ambits and responding not to each other’s voice or image but to each other’s actual presence. If a tree falls in a forest where there is no one to hear it crash to the earth, there is no real sound produced. Whether that’s really true, who knows? (My physics teacher in eleventh grade said it was and she seemed pretty sure she was right.) But what’s indubitably true is the corollary idea relating to community: if we grow freely to the fullest flower of our personhood without the presence of others to note our growth, to evaluate it, and to respond to it, there is no real growth at all…only the self-generated illusion of spiritual, emotional, or intellectual progress.
But that notion—that true personal freedom does not derive from the unfettered ability to act as one pleases but rather from the cohesive sense of belonging that comes from participation in the group, in a minyan, in a community of supportive friends, in society itself—that notion feels slightly out of step with our American ideal of the self-reliant individual who stands on his or her own two feet, who specifically does not need input from others to know where he or she stands, and whose rugged individualism is the human face of the Constitution's ideal of the natural right of people to exist according to their own lights and without reference to the opinions or prejudices of others. Ralph Waldo Emerson, the greatest of all American essayists and nineteenth-century thinkers, wrote about this stirringly and very convincingly in his essay, "Self-Reliance," which I have read many, many times over the years. And so I, filled with the greatest respect for Emerson and his philosophy of life, could easily find myself on the horns of a serious dilemma, part of me standing with Emerson and seeing community as, at best, the platform upon which individuals stand as they grow towards freedom, and the other part of me understanding community not as the platform on which free individuals stand but as the context that grants them their freedom. But that dilemma does not really exist.

For millennia, the opposite of freedom was slavery. That was Emerson’s world, and his essay reflects that fact. (“Self-Reliance” was written in 1841.) But that was then and today the opposite of freedom for most moderns is loneliness, not chattel slavery. And that is why Laura Turner’s op-ed piece was entirely correct: because no one, not even the super-digitized hipster who wouldn’t dream of setting foot in a bank or a post office, becomes less lonely other than by being in the presence of other flesh-and-blood people. Looking at images on a screen cannot provide the succor of presence any more than looking at pictures of food can provide nourishment. And there is no greater barrier to spiritual progress than the sense of aloneness that leads to alienation from society, disaffection with one’s neighbors, and creeping estrangement from God. I think I know enough of Emerson to risk saying that I think he would agree wholeheartedly if he knew enough of our world to form an opinion.
As 2018 draws to a close, I think of the way my community comes together when someone suffers a loss and how remarkable the shiva experience is precisely because of the level of inner-communal support we are able to provide, because so many are so eager to help. I think of my own brief hospitalization a few weeks ago and how many people—more than I could keep track of—texted me and left me emails and voice mails, each trying to provide from afar some of that sense of communal support that each understood to be requisite to the healing process. And I think of what it means to so many of us to be growing older in the context of a rich, caring community of friends ready to reach out and shore us up when we start to list to one side or the other, or when we suddenly feel ourselves to be on shaky ground, or when we momentarily lose our self-confidence and fall prey to the fears we mostly manage to keep at bay. It's a blessing, being part of any community…but my special blessing is to be part of ours. The Marlboro Man may exemplify the individualist who can ride off on his own into the next chapter of his life because he serves as his own universe of discourse, as his own arbiter of taste and style, as his own judge and his own jury. But for me, life is with people…and it is involvement with others that sets me free, that makes me free…and that enables me to live free in a world of people trying to go to church on their phones and uncertain why it never feels quite right.

Thursday, December 13, 2018

The DNA of Shared Experience


The modern science of genetics has made it clear how parents can pass aspects of their own specific heritage along to their offspring. And, indeed, the reason no one finds it at all startling to assert that specific physical traits can be transmitted from generation to generation is precisely because we can see easily enough how a child's hair color or eye color often matches one or both of his or her parents. From there, it's not that much of a leap to considering non-physical attributes—say, a predisposition to excel at athletics or at music—in that same vein. And, indeed, we all know instances of children appearing naturally to be good at some skill at which one of their parents excels (or at which both do). But can that notion be extended to include specific experiences parents may have had as well? At least at first blush that sounds like a stretch: the notion that something can happen to me and that that experience can somehow end up encoded in my DNA if it only crosses some theoretic line of genetic responsivity—that feels hard to imagine. But learning about the science of epigenetics has altered my thinking in that regard, and altered it powerfully. What I've learned is what I want to write about this week—and also about its implications for my understanding of the nature of Jewishness itself.
I was prompted to start taking the possibility of the transmissibility of experience seriously by a study published in the Proceedings of the National Academy of Sciences just this fall. Written by Dora L. Costa, a professor of economics at UCLA, and by Noelle Yetter and Heather DeSommer of the National Bureau of Economic Research in Cambridge, Mass., the study focused on data from Civil War days and concluded that the sons of Union Army soldiers who suffered severe trauma in the course of their time as prisoners-of-war in Confederate prison camps were significantly more likely to die without reaching old age than the sons of Union soldiers who were not captured or incarcerated by Confederate forces. Since all the sons in the study were born after the end of the war, the study suggests that they must have—or at least could have—somehow inherited their fathers’ traumata and suffered from their aftereffects. There were even subtle gradations of experience to consider: the sons of men who were imprisoned in 1863 and 1864, when conditions for prisoners of the Confederacy were especially brutal and inhumane, seem to have been even more likely to die as young people than the sons of Union soldiers taken captive earlier on in the war. (To see the original study, click here. For an excellent analysis of the study published in The Atlantic last October that most readers will find far more accessible, click here.)

The phenomenon has been demonstrated to exist in the animal kingdom as well. A few years ago, for example, scientists were able to demonstrate that when mice who were trained to associate the smell of cherry blossoms with the pain of electric shocks were bred to produce offspring, both the next generation and the generation after that responded anxiously and fearfully to the smell of cherry blossoms in a way wholly unlike mice whose parents or grandparents hadn’t been trained to associate that scent with that level of pain. The study, written by Emory University School of Medicine professors Brian G. Dias and Kerry J. Ressler and published in the journal Nature Neuroscience, concluded that the mice had somehow inherited a response built into their parents’ history of experience. (For a very interesting account of this experiment published in the Washington Post a few years ago, click here.)
And then there came the study of the Dutch famine victims. In the winter of 1944-1945, to punish the Dutch for having attempted to assist the Allied advance into Europe by shutting down the railway links that were being used by the Germans to bring troops to the front line, the Nazis blocked food supplies from coming into Holland so severely that more than 20,000 people died of starvation by the time the war ended the following spring. (For more information, the best source is Henri A. van der Zee's The Hunger Winter: Occupied Holland 1944-1945, published in 1998 by the University of Nebraska Press.) This would just be one more horrific story of German savagery during the war, but it led to some interesting scientific studies, one of which was published jointly by seven scholars led by Professor L.H. Lumey of Columbia University in the International Journal of Epidemiology in 2007 and which appeared conclusively to prove that the wartime experience of famine affected not only the poor souls who had to live through that dreadful winter in the Netherlands, but also the children born to them after the fact: as a group, the children of people who lived through the famine and survived to become parents later on experienced higher rates of obesity, diabetes, and schizophrenia than their co-citizens. They were also noticeably heavier than Dutch people born to parents who did not live through the Hunger Winter. And they died younger than other Dutch people did on average—the study found that people born to Hunger Winter parents were still experiencing a full 10% higher rate of mortality even sixty-eight years after the famine ended. How exactly this all works, or might work, is beyond me. (Click here for the study itself and here for a far more easily understandable summary of its results published in the New York Times last winter.) But what seems easy to grasp is the basic principle: the trauma suffered by people living in occupied Holland while the Germans were actively trying to starve the civilian population into submission was so severe that children born to those people afterward ended up with the experience somehow encoded in their own DNA even though they themselves did not experience the famine at all.

This accords well with an essay by Olga Khazan published last summer in The Atlantic in which the author was able very convincingly to demonstrate that victims of intense racial discrimination seem as a class to experience a process called methylation on the specific genes that are connected with bipolar disorder, asthma, and schizophrenia, and that this specific genetic change too can be passed along to subsequent generations. What methylation is exactly is also beyond me. The simplest explanation I could find was on the www.news-medical.net website (click here), but even it was still far too sophisticated for a mere liberal arts major like myself to fathom. The basic principle, though, is clear enough: the experience of intense discrimination can apparently imprint itself on your DNA in a way that makes it possible for children born to you even long after the fact to have to deal with traumatic experiences they didn't personally experience because those experiences have somehow ended up encoded in their DNA.
And that brings me to the Jewish angle in all of this. Just two years ago, Dr. Rachel Yehuda of Mount Sinai Hospital here in New York discovered evidence of this methylation process affecting the gene associated with stress not only in the DNA of Shoah survivors, but in the DNA of their descendants as well. The study, published in Biological Psychiatry in 2016 and not meant for any but specialist readers, was not universally praised—but mostly because only survivors and their children were included in the study, not the survivors' grandchildren or great-grandchildren. (For an example of a hostile response published in the U.K. in 2015, click here.) But those critics are basing themselves, as scientists surely should, on the specific way Dr. Yehuda used the empirical data that was available to her. I, on the other hand, not being encumbered by an actual background in science, find her work wholly convincing and more than easy to believe. Indeed, I have spent my whole life wondering from where Jews like myself (who can reasonably say that not a single day has passed since adolescence without some thought or image related to the Shoah surfacing, invited or uninvited, in consciousness) derive this obsessive involvement in the Holocaust. Both my parents were born in this country. Therefore, neither was personally a survivor. But both were adults when the war ended and the details regarding the camps and the mass executions became known; the trauma the Jewish community experienced over the months it took for the true story to become known—and the compounded trauma of slowly coming to terms with the degree to which the Allied forces chose consciously not to interfere with the daily transfer of Jews to the death camps—that trauma, it now seems to me, is at the core of my own worldview, of my own sense of who I am and what the world is about.

Nor is this just about the Shoah in my mind. Ancestry.com recently updated my DNA profile and declared me genetically to be 100% Ashkenazic Jewish. (I had previously been hovering between 96% and 97%). Am I also carrying around the epigenetic markers associated with the First Crusade and the butchery and devastation the Crusaders brought to the defenseless Jews of the Rhineland? I suppose that will only seem an obscure question to readers who don’t know me personally. 

Thursday, December 6, 2018

Chanukah 2018

For most North Americans, Chanukah is a sort of “us vs. them” affair: the foe wanted to obliterate us (or, depending on who’s telling the story, our faith or our culture or our way of worship) but the Jews of that time were unexpectedly, even perhaps miraculously, able to resist the enemy’s dastardly plans and to chase the minions of the evil king back to wherever it was they came from before they could bring their despicable plan to fruition. Doesn’t that sound about right?

Like all (or at least most) oversimplifications, this one is not entirely incorrect. There really was a King Antiochus on the throne of the Seleucid Empire—the Greek-speaking kingdom with its capital at Antioch in today’s Syria that ruled over the Land of Israel in the second century BCE—and he did promote the eradication of traditional Jewish norms of worship even in as sacred a space as the Jerusalem Temple to make them more universal and less ethnically distinct. There was every reason to expect the ragtag group of guerilla warriors who gathered around the Maccabees—who seem to have come out of nowhere to do battle with Antiochus’s legions—there really was every reason to expect them to go down to defeat, yet they were successful and managed against all odds to expel the king’s armies from what was in those days, after all, a province of his own empire and—even more unimaginably—to wrest some version of autonomy from the central government and thus to install a kind of self-rule that lasted for almost a century. And if the darker part of the story—the one we generally ignore featuring large numbers of Jewish people more than eager to make Jewish ways less particularistic and more in step with the great cultural tide of the day (called Hellenism, literally “Greekishism,” because of its origins in the culture of classical Greece) and very happy to have the king’s support in their effort to reform the Jerusalem cult and make it more appealing to themselves and to outsiders looking in—if that part is generally ignored, that’s probably all for the best. Who wants an ambiguous yontif anyway? Much better to stick with the Hebrew School version and not to stir the pot unnecessarily! We don’t have enough to deal with as it is?
This week, therefore, I would like not to talk about the well-known part of the Chanukah story and its key players at all. (Shelter Rockers will hear me speak about that part of things in shul on Shabbat anyway.) Instead, I'd like to start the story in medias res and begin to say why Chanukah really does still matter by introducing a personality that almost no readers will ever have heard of, one Judah Aristobulus.

And here he is, at least as Guillaume Rouillé, the inventor of the paperback, imagined him in sixteenth-century Lyons. But who was he really? And why do I want to start my peculiar, start-in-the-middle version of the Chanukah story with him of all people?
Everybody has heard of Judah the Maccabee and most know that he had several brothers as well as a famous father. But what exactly happened to them all—that is the part no one knows. And that's the real shame, because the most profound part of the story is precisely its least-well-known part.

Jerusalem was taken in the year 164 BCE, but the fighting continued for years and, indeed, Judah himself died in battle in 160 and was replaced as commander-in-chief of the Jewish army by his brother Jonathan, who at the time was already serving as High Priest. Jonathan was as much a politician as a general or a priest, however…and he made a fair number of enemies by attempting to transform an autonomous Judah within the larger Seleucid empire into a truly independent state by signing treaties with any number of foreign countries. He lasted for almost two decades, but was finally assassinated by someone who apparently found his politics intolerable and was succeeded by his brother Simon, the last of the original Maccabee brothers.  The inner politics of the day is interesting enough, but what fascinates me in particular is the way that the Maccabees, who started out only wishing to prevent the Seleucid emperor from disrupting traditional Jewish life, became more and more intoxicated with the power they saw themselves able to seize. Judah was a kind of a general. Jonathan was a general and High Priest.  And Simon convened a national synod that formally recognized him as Commander-in-Chief, High Priest, and National Leader. Most important of all, he negotiated a treaty with the Roman Senate that cut the Seleucids out of the action entirely and acknowledged solely the Maccabees as the legitimate rulers of their land.
The story only gets bloodier. Simon was murdered in 134 BCE by his son-in-law, a fellow named Ptolemy, and thus became the first Maccabee to be succeeded not by a brother but by his own son, a man known to history as John Hyrcanus. In his day, the war with the Seleucids flared up again. The details are very confusing, but the basic story is simply that the Seleucids took back all of Israel except for Jerusalem itself, then abandoned it all when Antiochus VII died in 129. Indeed, as the Seleucid empire slowly fell apart, John Hyrcanus embarked on a military campaign to seize what he could of the adjacent world. And he was successful too, conquering a dizzying number of neighboring states; in the course of at least one campaign, the one against the Idumeans (the latter-day Edomites), he forced an entire nation to convert to Judaism. Most important of all, he cemented the nation's relationship with Rome, agreeing to work only in the best interests of the Roman Republic in exchange for its agreement to recognize Judah as a fully independent state. He established relations with Egypt and Athens too, thus making Judah into a real player on the international scene. And then he died in 104 BCE, one of the very few Maccabees to die of natural causes.

His eldest son was Judah Aristobulus. The original plan was for Judah to become high priest and for his mother to become the political leader of the nation. Judah Aristobulus (also sometimes called Aristobulus I) found that irritating, however, so he imprisoned his mother and allowed her to starve to death in jail. Then, for good measure, he also imprisoned all his own siblings but one. (He had that one killed eventually too.) And it was this Judah Aristobulus who, not content with just being High Priest, commander-in-chief, and political leader, also named himself king.
It didn't last. He himself didn't last—he was sickly to start with and then, after one single year on the throne of Israel, he too died and was replaced by his brother, known to the Jews as King Yannai and to the rest of the world as Alexander Jannaeus.

It’s easy to get confused by the details. I’ve read the part of Josephus’s Antiquities of the Jews that covers the Maccabean years—the only sustained, detailed narrative covering the entire period—a dozen times. It couldn’t be easier to get lost in the forest amidst so many different trees—and the fact that there are so many different people with the same names only makes it more confusing. But when you step back and look at the larger picture, you see something remarkable…and deeply relevant to our modern world.
The Maccabees—known to history more regularly as the Hasmoneans—started out as nobly motivated as could be. They had an emperor ruling over them who held their national culture in disdain, so the Maccabees rose up and somehow won a measure of autonomy for their people that most definitely included the right to run their own cult and to pursue their own spiritual agenda. But the power they won on the battlefield corrupted them from within, leading them not only not to act in the nation's best interests but to cross a truly sacred line when Judah Aristobulus finally broke with the very religious tradition his family came to prominence to protect by declaring himself king.

He wasn't from the tribe of Judah. (The Maccabees were priests, so of the tribe of Levi.) He wasn't descended from David. He had no legitimate or even illegitimate claim to the throne. But he took it anyway…and that act of self-aggrandizing sacrilege set the stage within just a few short decades for a massively bloody civil war undertaken by two of his nephews who were vying for the crown, which disaster opened the door to the Romans, who saw in it an opportunity to occupy Judah and make it part of their empire, which they did in 63 BCE. The next time Jews managed to declare an independent Jewish state in the Land of Israel was in 1948 CE, a cool 2,010 years later (there being no year zero to count).
It is never a good thing when a nation’s leaders see in public service not a way to contribute to the welfare of the nation but an avenue for self-aggrandizement, self-enrichment, and self-promotion. The Maccabean descendants became wealthy and powerful. They hobnobbed with the delegates from the world’s most important nations, including the world’s sole super-power at the time, the Roman Empire. They reduced even something as innately sacred as the office of High Priest to a mere stepping stone capable of leading to still greater authority. As they became more and more entangled in their own inner-familial struggles, they relied increasingly on generals who themselves had a wide variety of personal agendas to pursue. And then they crossed the line and, in an act of spiritual madness, made themselves the kings of Israel despite the fact that they had no justifiable claim to the crown.

Public service is a burden and a privilege. Our greatest political leaders have always been people who saw that clearly and who allowed themselves to be saddled with the millstone of public office out of a sense of personal honor and deep patriotism. We have had American leaders like that—Abraham Lincoln, I believe, was such a man—and our nation is the richer and better for their service. But the larger story of Chanukah—the one we never tell in Hebrew School—has its own deeply monitory lesson to teach: that greatness in governing is a function always of personal character…and never one of mere opportunity.

Friday, November 30, 2018

Rock of Ages


Those of my readers who hear me preach at Shelter Rock regularly know that one of the themes I return to over and over is the mystery of things, of stuff. Three hundred years ago, in 1718, the world was a different place in many ways. New Orleans had just been founded in what was then called New France. Spain, a naval superpower, was at war simultaneously with the Holy Roman Empire, Britain, France, and Holland. The potato had just been brought to New England, where it not only flourished but quickly became a staple of the North American diet. Coffee beans were new too, brought that year for the first time to the New World (more specifically, to Surinam in South America). So, at least at year's beginning: no French roast, no French fries, and no French Quarter! But the world was full of people—more than 600 million of them by most estimates. (Some were French, obviously.) And all of them had stuff.
Jewish people also had stuff in 1718, and a lot of it. There were something like 7 million Jews in the world as the eighteenth century dawned. All the married couples had k’tubbot. All the wives had wedding bands. All the men—or surely lots of them—owned t’fillin. Each synagogue had an aron kodesh filled with Torah scrolls. Every community owned a m’gillah from which it read on Purim. In each home, or surely in most, was a seder plate. Each community—and with no exceptions at all—kept a record of births and deaths, a list of who was buried in which grave in the cemetery they maintained, a ledger of contributions solicited, received, and acknowledged, and a record of circumcisions and marriages.

Some of this stuff survived. There are, for example, many Jewish communal record books in the manuscript and rare book libraries of the world. And some of it simply was not built to last and, in fact, did not survive into the modern day. But what about the rest of everything? That’s the question that continues to fascinate. It’s unimaginable—and truly so—that anyone would throw out his or her parents’ k’tubbah after their deaths.  But even less conceivable is the image of anyone not wishing to keep a beloved mother’s wedding band or the veil she wore at her wedding or her own mother’s ring or veil. But if that is the case—which, speaking realistically, it surely must be—then where is all that stuff?
I’m a good example. I know where my mother’s wedding band is. (It is on the ring finger of Joan’s left hand.) But where is my parents’ k’tubbah? And where are their own parents’ k’tubbot? And where are my grandmothers’ wedding bands? I had two grandmothers when I was born, one of whom, my father’s mother, died when I was only four years old. But my other grandma, my mother’s mother, I remember well…and I remember her wearing a wedding band, a plain gold band that she wore for decades after my grandfather died. (She gave me my first piano lessons, and I can still see her hands at the keyboard.) Was she buried wearing it? I suppose she might have been, although generally speaking that isn’t our custom. But if she’s not wearing it, then who is?  I can’t see my mother or her sister, my grandmother’s only two children, selling their mother’s wedding band for a few dollars! Nor did either of them wear it, not my aunt and definitely not my mother. So where exactly is it?  That’s the question that occupies me.

Einstein famously once wrote that the distinction between the past, present, and future is "just a stubbornly persistent illusion." I'm sure I don't fully understand what he meant—the difference between the past and the future feels pretty non-illusory to me—but whatever of that thought I can process is related to these other ideas I've been writing about. The present feels so real, so permanent, so solid. Is that sturdiness an illusion that dissolves easily in the flow of moments so that all the things of this world are simply present in the here and now but specifically not guaranteed by their existence in that mode to survive into the future? I wrote before about 1718, but I could also have written about 1018, a full thousand years ago. The Vikings were in their fullest flower back then, raiding Scotland and the north German coast with relative impunity. But there were Jewish communities even then all across Europe, North Africa, and the Middle East. And the wives in those communities also had wedding bands and k'tubbot. You see where I'm going with this. Did Prince have it right back in the '80s when he sang that "life is just a party…and parties weren't meant to last"? I wasn't much of a fan, but maybe I should have been!
But some things do manage to last. And when they do, they become symbols—at least in my mind—of the way the apparently terminal ephemerality of things need not point to the conclusion that history itself is mere midrash: the fact that the things of the past seem destined to vanish does not make of history itself something that exists only in the recollective consciousness of humankind rather than as part of real, if absent, reality.

A tiny stone has surfaced that speaks directly to this set of issues…and also directly to me personally. 

It's just a tiny thing, a pebble really. But because it was found in Jerusalem by archeologists working under the supervision of the Israel Antiquities Authority in dirt taken in 2013 from beneath Robinson's Arch (a site adjacent to the Western Wall) and only now finally fully sifted, and because it has one single word etched into it, this mere pebble steps out of the flow of moments to speak to us and to remind us that, as Faulkner wrote in Requiem for a Nun, the past is not only not dead and gone, it's never even really past at all.

The word is the Hebrew beka, the name of a specific weight. The first letter, the bet, is written backwards. Also, even more mysteriously, the letters are written from left to right rather than from right to left. Was the engraver dyslexic? Or was he (or she!) perhaps illiterate, thus someone merely copying letters without being able personally to read them and not doing too good a job? Or—given that there are several personal seals that have survived from the First Temple period that feature an analogous kind of mirror script—was the left-to-right thing intentional? To none of these questions is there an answer, nor ever will there be. What a beka itself was, though, isn't hard at all to know: the measure is mentioned in the Torah twice, once to describe the specific amount of gold in the nose ring that Abraham's man Eliezer offered to Rebecca when he first encountered her and realized that she was destined to be Isaac's wife, and once in the context of the half-shekel annual tax that served as a count of adult Israelites: "[you shall pay] one beka per head, [that is,] half a shekel…for each person over twenty years of age counted in the census."
Eli Shukron, the lead archeologist on the site, explains the system clearly: "When the half-shekel tax was brought to the Temple during the First Temple period, there were no coins, so they used silver ingots. In order to calculate the weight of these silver pieces, they would put them on one side of the scales and on the other side they would place the beka weight. The beka was equivalent to the half-shekel, which every person from the age of twenty and up was required to bring to the Temple." It appears, he goes on to say, that the biblical shekel, a weight rather than a coin in the modern sense, weighed precisely 11.33 grams.
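Just to make the arithmetic explicit (the 11.33-gram figure is Shukron's; the little calculation that follows is my own back-of-the-envelope inference, not anything reported by the excavators), a beka standard would come to a bit more than five and a half grams:

\[
\text{1 beka} \;=\; \tfrac{1}{2}\ \text{shekel} \;=\; \frac{11.33\ \text{g}}{2} \;\approx\; 5.67\ \text{g}
\]

Which is to say that the pebble itself, if it really did serve as a working weight, should tip a scale at just a bit more than a modern U.S. nickel does.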

When I look at this little stone, I really do get what Faulkner meant as this thing from ancient times surfaces in our own day to remind us that the past isn't some sort of midrash on present-day reality invented by moderns to explain why things in the world are the way they appear to be, but an actual record of what once was.
When people debate the status of Jerusalem as though it somehow wasn’t the capital of Israel in ancient times, as though the Temple Mount somehow wasn’t the site of not one but two different Temples that served in their respective eras as the spiritual center of all Jewish enterprise, or as though the relationship of the Jewish people to the Land of Israel is some sort of colonialist fantasy as little rooted in reality as the Belgians’ claim to the Congo or the British claim to India—words somehow fail me as I feel overwhelmed by some unholy amalgam of contempt, irritation, and anger towards people for whom history really is whatever you wish it to be. And then support comes, as Scripture says it always does, from some unexpected corner of the universe. A stone—really just a pebble—appears in the world. It’s tiny. It weighs almost nothing. Its inscription was either intentionally or unintentionally written contrary to the normal rules of written Hebrew, or what we moderns suppose they must have been in First Temple times. It languished under a mountain of dirt not for centuries but for millennia. And then, out of nowhere, it rolls out onto centerstage and says its sole word: beka. And packed into those two ancient syllables are several things: the courage of my own convictions, the undeniable evidence of history confirmed, and no little amount of hope for the future. Not bad for something so tiny it is dwarfed by an average-sized human hand in the photograph above and, at the end of the day, has a vocabulary that consists of a single word. Not bad at all!

Thanksgiving 2018


For as long as any of us can recall, American Jews have celebrated Thanksgiving out of a deep sense of gratitude to God for any number of different things that define our lives in this place: the great prosperity of this land in which we share; the security provided for us and for all by our matchless and supremely powerful military; the freedoms guaranteed to all by a Bill of Rights that basically defines the American ethos in terms of the autonomy of the individual; the specific kind of participatory democracy that grants each of us a voice to raise and a ballot to cast; the freedom to embrace a minority faith—or any faith—without fear, reticence, or nervousness about what others may or may not think; and the inner satisfaction that comes from being part of a nation that self-defines in terms of its mission to do good in the world and to combat tyranny, oppression, and demagoguery wherever such baleful things manage to take root among the peoples of the world.
None of the above strikes me as being anything other than fully true, yet I can't stop reading op-ed pieces and blog postings that posit that things have somehow changed, that the world now is not as it even just recently was, that it is the past and all its glories that shine bright now rather than the unknown—and unknowable—future, and that every one of the reasons listed above for us American Jews to join our fellow citizens in feeling deeply grateful for our presence in this place could just as reasonably be deemed illusory as fully real. And I hear those sentiments, interestingly enough, coming from people on both ends of the political spectrum as well as from all those self-situated just to the right or left of center. Nor are American Jews alone in their unease: if there is one thing vast swaths of our American nation seem able to agree upon, it's that the age of great leadership belongs to history and that it is thus our destiny for the foreseeable future to be led by people whose sole claim to serve as our nation's leaders is that they somehow managed to get themselves elected to public office. No one seems to dispute the fact that this is not at all a healthy thing for the republic. But expressing regret is not at all the same thing as formulating a specific plan to address the situation as it has evolved to date.

To keep this creeping malaise from interfering in an untoward manner as we prepare to celebrate our nation’s best holiday, I suggest we take the long view.
Frederic E. Church was a nineteenth century man, born in 1826 when John Quincy Adams was in the White House and dead on the 7th of April in 1900 as a new century dawned. He was also one of America’s greatest landscape painters, a member of the so-called Hudson River School and, in his day, one of the most celebrated artists alive. I mention him today, however, not to recall the larger impact of his oeuvre, but to tell you about one single one of his paintings, the one called “The Icebergs.”


As you can see, the picture (currently owned by the Dallas Museum of Art) is magnificent. But what made it famous in its day was specifically the way in which it was taken by many to capture the surge of self-confidence that characterized America’s sense of its own destiny at the end of the nineteenth century. One author, Jörn Münkner, characterized the painting’s appeal in this passage composed when the painting was put on exhibition at Georgetown University:
Frederik E. Church's "The Icebergs" pictured the Alpha and Omega of time and tide. It reflected the mid-19th century American world-view that was characterized by the belief in a “Manifest Destiny” according to which the United States…was the New Israel that had been prepared for by the divinity. 1861 saw the U.S. reigning from the Atlantic to the Pacific, from the Gulf of Mexico to the Great Lakes. Nature was regarded as holy and science as sanctified. The belief in the American Garden Eden whose very fortunes were guided by the Creator emanated out of the scientifically correct “The Icebergs.” It was the display of the rare and intoxicating American amalgam of science, religion, and nationalism. The relationship of the actual and the real that was concealed in the painting revealed the idea/fact that scientific thinking in America was shaped by a deep religious faith. Providence guided the scholarly painter's hand.

I find those words somehow inspiring and chilling at the same time, but I see what the author means: even after all this time, the painting hasn't really lost its ability to suggest the majesty of nature or its timelessness. I get a bit lost on my way from that thought to the notion of manifest destiny inspiring America's nineteenth-century rise to greatness (and, yes, the whole America-as-the-new-Israel idea is beyond peculiar, as surely also is the fact that the artist was thinking so expansively about American destiny on the eve of what in 1861 would still have been unimaginable carnage), yet I really can see the strength, the power, and the sense of ineluctable kismet mirrored in the majestic icebergs in the picture…and so finding in them a symbol both of America's uniqueness and of its remarkable destiny is not as big a stretch as I thought at first it would be.
But other nineteenth-century types saw different things in the image of these gigantic icebergs afloat in an endless sea.

Edward Bellamy, once one of America's most famous authors, has been almost completely forgotten. Yet his 1888 book, Looking Backward, was the third most popular American novel of the nineteenth century, exceeded in fiction sales only by Uncle Tom's Cabin and Ben-Hur. An early utopian novel, the book tells the story of one Julian West, a young man from Boston who goes to bed one night in 1887 and somehow manages not to wake up until the year 2000. Some of the author's predictions are uncannily correct—he depicts West as enjoying the almost instant delivery of goods ordered without having to visit any actual stores—while other things West finds in 2000, like a universal retirement age of 45, have not turned out quite as the author imagined they might. But it is the author's postscript to his own work I want to cite here, as he imagines America in the future and uses his own version of the iceberg symbol to express his dismay. Almost definitely thinking of Church's painting and the expansive optimism it inspired, he wrote as follows:
As an iceberg, floating southward from the frozen North, is gradually undermined by warmer seas, and, become at last unstable, churns the sea to yeast for miles around by the mighty rockings that portend its overturn, so the barbaric industrial and social system, which has come down to us from savage antiquity, undermined by the modern humane spirit, riddled by the criticism of economic science, is shaking the world with convulsions that presage its collapse.

This line of thinking I also understand: for all they appear mighty and invincible as they rise from the sea, icebergs are, after all, just so much frozen water. They melt as they float into waters too warm to sustain them, which may (or may not) dramatically affect the ocean into which they dissolve but cannot affect the iceberg itself once it disappears into the sea and is no more.
So one image and two distinct interpretations. Of course, both are right. An inert, uncomprehending iceberg was powerful enough to sink the most sophisticated ocean liner of its day in 1912. And the semi-famous iceberg rather prosaically named B-15, which broke away from Antarctica’s Ross Ice Shelf in 2000, is about to melt into the South Atlantic Ocean. At 3,200 square nautical miles, B-15 is larger than the island of Jamaica. Yet its doom was sealed not by weapons of mass destruction or acts of God, but by the sea’s slightly too-warm water. (To read more, click here.) From this we learn that strength and weakness are not as unrelated as their antithetical nature makes them at first appear. Indeed, they are each other’s twins…and from that thought I draw the lesson I wish to offer to my readers for Thanksgiving Day in the Age of Anxiety.
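For readers who, like me, don't think naturally in square nautical miles, a quick back-of-the-envelope conversion may help; the calculation that follows is my own arithmetic, not a figure taken from the reporting:

\[
3{,}200\ \text{nmi}^2 \times (1.852\ \text{km/nmi})^2 \;\approx\; 3{,}200 \times 3.43\ \text{km}^2 \;\approx\; 11{,}000\ \text{km}^2
\]

Jamaica covers roughly 11,000 square kilometers, so the comparison really does hold: at its calving, B-15 was an island-of-Jamaica's worth of ice set adrift in the sea.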

Our nation is currently divided into people who see America's great and mighty presence in the world pointing to a remarkable destiny framed by our nation's ongoing commitment to the principles upon which the republic was founded and still rests. Such people look at Church's painting and are heartened by what they see because solid, powerful, majestic icebergs afloat in the sea remind them of our nation, its strong moral underpinnings, its commitment to (the American version of) tikkun olam, and its invincible military. This group includes members who vote red and members who vote blue. But others see our nation coming apart at the seams, a country divided into warring factions in which personal liberty is increasingly defined in terms of the sensitivities of the majority and in which justice is meted out entirely differently to people of different races and social strata. Such people look at Church's painting and hear Bellamy's warning that even giant icebergs that look stable and impregnable can be undermined by the gentle, unarmed presence of a warm current in the sea. Nothing lasts forever. Every Achilles has his heel. No garden thrives because it was once watered.
So who is right? I propose we give the last word to Bellamy himself, whose afterword to his own novel (which I am currently reading for the first time) closes with these words: "All thoughtful men agree," he writes, "that the present aspect of society is portentous of great changes. The only question is whether they will be for the better or the worse. Those who believe in man's essential nobleness lean to the former view, those who believe in his essential baseness to the latter. For my part, I hold to the former opinion. Looking Backward was written in the belief that our Golden Age lies before us and not behind us, and is not far away. Our children will surely see it, and we too who are already men and women, if we deserve it by our faith and by our works."

Despite it all, that’s what I think too! And I offer that thought—part prayer, part wish, part hope—to you all on this Thanksgiving Day, a day on which all Americans are united by the desire to recognize the good in ourselves and our nation, and to be grateful for the potential to do good in the world that derives directly from that noble sense of what it means to be an American.

Thursday, November 15, 2018

Hatred, Fear, Hope

Like most Jewish Americans, I was caught off-guard back in 2017 by the sight of white supremacists marching in Charlottesville, Virginia, and carrying aloft the flags of the Confederate States of America and Nazi Germany.  (That they were also carrying the so-called Gadsden Flag that was originally used by the Continental Marines during the American Revolution—the one designed back in 1775 by Christopher Gadsden featuring the words “Don’t Tread on Me” beneath a coiled-up, scary-looking rattlesnake—struck me primarily as a sign of how little these people know about the values upon which the nation was founded in the first place.)  The sight of those flags being held aloft proudly and defiantly was beyond upsetting, but not particularly confusing. But what was confusing—to me and I suspect to most—was the chant “Jews will not replace us,” which I hadn’t ever heard before and which I now realize I misunderstood, taking it to mean something entirely different than what it apparently does mean.

Taking the slogan at what I thought was face value, I understood the marchers to be declaring their determination not to allow themselves to be replaced by Jews eager to take over their jobs and leave them without work and eventually destitute. In other words, I imagined this somehow to be tied to the marchers' skittishness about the job market and their need to find someone to blame in advance for losing jobs they fear they simply haven't lost yet and in which they fear they will eventually, to use their own word, be "replaced." It hardly seems like a rational fear, but that's what it felt like it had to mean, and so I ended up taking it as just so much craziness rooted not in anything corresponding to actual reality but in the malign fantasy that, left unchecked, we Jewish people will somehow take over the world and install our own people in whatever jobs we wish without regard to where such a move would leave the people currently holding them. And that, I sense, is what most Jewish people—and maybe even most Americans—who heard this chant took it to mean.
But now that I’ve read more, I see that that is specifically not what “Jews will not replace us” means and that the slogan specifically is not about Jews replacing Christians at work at all. Instead, the chant encapsulates the marchers’ fear that we Jews are working not to take over their jobs ourselves but to replace them at work with third-party others chosen specifically to deprive them of their livelihoods and their places in society. And who might these other people be? That, it turns out, is where anti-Semitism and racism meet: the hordes of jobseekers the marchers fear turn out not to be Jews at all, but hordes of dark-skinned immigrants feared already to be pouring over our borders and insinuating themselves into an already-tight job market. And it is those people who, because they are presumed ready to work at even the most menial jobs for mere pennies, are imagined to be threatening the white (i.e., non-immigrant) people who currently hold those jobs and who earn the American-sized salaries they use to support themselves and their families.

To say this is crazy stuff is really to say nothing at all. Yes, we have a huge and so-far-unresolved issue in this country with illegal aliens living in our midst and I’m sure that those people do take jobs that legal residents might otherwise have. And lots of non-crazy people, myself definitely included, are eager to find a way out of this morass that we ourselves have created by failing to police our borders adequately and by allowing the number of undocumented illegals in our midst to grow from a mere 760,000 or so in 1975 to something like 12.5 million today with no obvious solution in sight.
So wanting a reasonable solution to be found—one that is fully grounded both in settled U.S. law and in our national inclination to be just, fair, kind, and generous, and one that doesn't make after-the-fact chumps out of all those countless millions of people who followed all the rules and immigrated here fully legally—is not crazy at all. What is crazy is the fantasy that Jewish Americans somehow possess the secret power to order Walmart and Costco and every other American business to fire specific employees and replace them with pre-selected others regardless of whether those others are or are not here legally. Crazier still is the contention that American Jews somehow control American immigration policy, and that we are somehow able imperiously to issue instructions to Democratic and Republican administrations alike and expect them to be obeyed. But craziest of all is the belief that, precisely because American Jews are so supremely powerful, we must be attacked violently before we order the administration to let even more immigrants into our nation. That, after all, was the specific reason the Pittsburgh shooter gave for his savagery in a comment posted online just before the attack: to give the officers of HIAS pause for thought before they work to bring in any more "invaders [to] kill our people." My post-Pittsburgh proposal is that we stop dismissing that line of thinking as aberrant looniness that no normal person could actually embrace and start taking it far more seriously.

It feels natural to consider the various kinds of prejudice that characterize our society as variations on a common theme. And in a certain sense, I suppose, that is true. But these pernicious attitudes are also distinct and different, both in terms of their root causes and the specific way they manifest themselves in the world: misogyny, racism, and homophobia, for example, are similar in certain cosmetic ways, but differ dramatically in terms of the specific malign fantasies that inspire them and thus should (and even probably must) be addressed in different ways as well. And we should also bring that line of thinking to bear in considering anti-Jewish prejudice: similar in some ways to other forms of prejudice, anti-Semitism also has unique aspects that it specifically does not share with other forms of bigotry. Indeed, the fact that the anti-Semitism put on public display in Charlottesville was rooted in the haters’ groundless yet powerful fantasy about the almost limitless power imagined somehow to have wound up in the hands of the hated is all by itself enough to distinguish anti-Semitism from other kinds of prejudice. And not at all irrelevant is that it appears not to matter at all how impossible it feels to square that fantasy about Jewish powerfulness with the degree to which powerless Jews have suffered at the hands of their foes over the centuries, and particularly in the last one. In that regard, I would like to recommend a very interesting essay by Scott A. Shay, the author and Jewish activist, that was published in the Pittsburgh Post-Gazette a few days after the shooting at Tree of Life Synagogue and which readers viewing this electronically can access by clicking here.  

Nor is this a problem solely of one extreme end of the political spectrum. In the wake of Pittsburgh, the spotlight is on the anti-Semitism that characterizes the extreme right, but the same light could be shone just as brightly on the anti-Semitism of the extreme left…and particularly when it promotes hostility toward Israel’s very right to exist and to defend itself against its enemies. Indeed, the fact that Israel is perceived, not as an outpost of democracy smaller than New Jersey trying to survive in a region in which it must deal with nations and political terror groups that openly express their hope to see it and its Jewish population annihilated, but as an all-powerful Goliath seeking to eradicate its innocent opponents militarily rather than to negotiate fairly or justly with them, is part and parcel of this fantasy regarding the power of the Jewish people. Coming the week after Hamas fired over five hundred missiles at civilian targets in Israel, each capable of killing countless civilian souls on the ground, the image of Israel as the aggressor in its ongoing conflict with Hamas sounds laughable and naïve. But maybe we should stop laughing long enough to ask ourselves how we might confront this myth of Jewish power, whether it is focused on American Jews imagined to be in control of American foreign policy or on Israeli Jews imagined to be intent on crushing their innocent victims for no rational reason at all, and how we might address, not this or that symptom of the disease, but the disease itself.
Distinct (at least in my mind) from theological anti-Semitism rooted in the supersessionist worldview promoted for so long by so many different Christian denominations, this specific variety of anti-Semitism seems rooted not in messianic fervor but in fear. And that, I think, is probably how to go about addressing it most effectively: by pulling that fear out into the light and exposing it as a fantasy no less malign than inane. By forcing young people drawn to the alt-right to look at pictures of the innocents murdered in Pittsburgh and to ask themselves if they truly have it in them to believe that U.S. government policy was until two weeks ago being dictated by 97-year-old Rose Mallinger or by Cecil or David Rosenthal, both gentle, disabled men whose lives were built around service to their house of worship. By forcing young people poisoned with irrational hatred of Israel to look at the portraits of the 1,343 civilians murdered by Palestinian terrorists since 2000 and to see, not predators or fiends, but innocent victims of mindless violence. By insisting that young people drawn to fear Jews and Judaism be exposed to the stories of Shoah victims—and, if possible, to survivors themselves while they are still with us—and through that experience to understand where groundless prejudice can lead if left unchecked and unaddressed.

To hope that no one is drawn to extremism is entirely rational, but it really can’t be enough. Just as young people who seem drawn to a racist worldview should be forced—by their parents and their teachers in school, or by society itself—to look into the eyes of those poor souls gunned down in the Emanuel A.M.E. church in Charleston on June 17, 2015, after welcoming their murderer into their midst for an hour of Bible study, so should society itself rescue young people from themselves once they are perceived to be embracing the kind of anti-Semitism that led directly to Pittsburgh…forcing them to confront the bleak hatred that has taken root in their hearts and to see it for what it is: a fantasy rooted in fear that can be overcome and eradicated by anyone truly willing to try.

Thursday, November 8, 2018

Looking Back and Forward


So the much-anticipated midterm election came and went, leaving all Americans, regardless of party affiliation or political orientation, finally united on at least one point: that the Congress, now a bicameral house formally divided against itself, will accomplish nothing at all for the foreseeable future...unless its members can find it in their hearts to compromise with their opponents and to craft legislation so moderate and so overtly and appealingly reasonable that people on both sides of the aisle will fear angering their constituencies by not supporting it. How likely is that to happen? Not too! Still, that thought—that in the absence of flexibility, tractability, and generosity on the part of all, nothing at all will be accomplished and no one will have a record (other than of obstructionism) to run on in future elections—has a sort of silver lining: whatever legislation is passed by the new Congress will have to be of the rational variety that Americans of all political and philosophical sorts can support. So there’s at least that!
As my readers all surely know by now, my training—my academic training, I mean, as opposed to my spiritual training in rabbinical school—is in ancient history and the history of ancient religion. And I’ve lately been reading some interesting analyses of the mother of all democracies, the one set in place to govern the city-state of Athens, and the specific way our American democratic system does and doesn’t preserve its ancient features and norms. Obviously, a long road stretches out between them and us! Even so, there are at least some features of Athenian democracy that are definitely worth revisiting.

Some of the specifics will surprise most readers. Ancient Athens was governed by a council of 500 called the boulé whose members were chosen—not by an informed electorate casting ballots for the candidate of their choice—but by lot, so that fifty men chosen at random to represent each of the ten tribes of ancient Athenians were put in place and handed the reins of government. Each served for one year, but no one could serve more than once a decade nor could any citizen serve more than twice ever. The boulé had its own hierarchy, however: its in-house leadership, called the prytany, consisted of fifty men, also chosen by the casting of lots, who served for one single month and were then replaced. The idea was simple—and not entirely unappealing: by choosing both the people’s leaders and those leaders’ leaders at random, it was certain that the power of governance would rest neither with power-hungry people eager to rule over or to dominate others nor with anyone motivated by the possibility of personal gain through service to the nation. The leaders of Athens were thus disinterested parties, people with no specific yearning to be in charge yet whom fate somehow arbitrarily put into positions of leadership nonetheless. Yes, it was surely true that the inevitable blockhead would occasionally end up chosen to serve, but such a person would be vastly outnumbered by more thoughtful, more reasonable individuals. (The boulé did have five hundred members, after all.) The system itself has an antique feel to it; its specific point, keeping power out of the hands of people who lust after it and firmly in the hands of people who would be happier doing something else entirely, not so much!
The situation that prevailed in ancient Athens appeals in other ways as well. The boulé, for example, lacked the power to make any final decisions on its own. To do that, all citizens were invited to participate in a forum called the ekklesia that met every ten days for the specific purpose of ratifying any of the boulé’s decisions before they became law. (This body met on the Pnyx, a hill just west of the Acropolis.) All citizens were automatically members of the ekklesia and were welcome to speak up and participate in pre-vote debate and discussion. The power was thus fully vested in the people—the boulé could pass all the bills it wanted, but none of them could become law until the people signed on.



Etymologically, the “demo” in “democracy,” from the Greek demos, references the full citizenry, the people of the nation who self-governed not by electing people to govern them, but by governing the governors and by requiring that the decisions of the boulé be ratified by the public. Is this sounding at all appealing to you? The more I think of it, the more remarkable it sounds to me…and, yes, in some ways intensely appealing. Would this work in a nation of 328 million citizens like our own? Not without some serious adjustment—but the notion that the very last people to whom power should ever be granted are those specific individuals who yearn the most intensely for it, that idea has some serious merit in my mind!
And then there was the concept of “ostracism,” which I think we should definitely consider bringing back. The English word means exclusion from a group, usually because of some perceived scurrilous misbehavior. But the word goes back to Athens, where it denoted something far more specific: the right of the citizenry, the demos, to vote one single time in the course of a year to expel from the city, for a period of ten years, anyone perceived as having become too powerful—and who thus, merely by being present in the city, weakened the democratic principle of power being vested fully in the hands of the people. It didn’t happen every year, but once the decision was taken—and if more than six thousand citizens voted to ostracize by writing the name of the individual they wished to see gone on a piece of broken pottery called an ostrakon—then the “ostracized” individual was given ten days to organize his affairs, after which he was obliged to leave the city and not permitted to return for ten years. There was no possibility of appeal. There was a certain risky arbitrariness to the whole process—there was no obligation for any citizen to state why he was voting to ostracize whomever it was he was voting to exile and there was no judge or jury—but also something exhilarating about a procedure designed to place in the hands of the people the power to exile anyone at all (including civic leaders, generals, the wealthy, and the city-state’s most influential citizens) for fear that that specific individual was exerting a malign influence on the right of the people to self-govern. And there was at least one profound safeguard against abuse in the fact that the ostracized individual had to be voted off the island by six thousand citizens. Even so, the procedure eventually died out. (The last known ostracism was towards the end of the fifth century BCE.) But it is also thrilling to imagine a democratic city-state in which anyone who yearns for power must temper such yearning with the knowledge that being perceived to be acting other than in the best interests of the people could conceivably lead to being sent away regardless of the immensity of one’s fortune or the breadth of one’s influence.


There were darker sides to Athenian democracy as well. Citizenship was limited to males over the age of eighteen; women were completely excluded both from membership in the boulé and from participation in the ekklesia. Nor did all citizens choose to participate fully in their fully participatory democracy. Indeed, most citizens failed to show up most of the time. To increase attendance, in fact, a decision was made around 400 BCE to pay citizens who showed up for their time, thus making it more reasonable for members of the working class to take the time off to attend. But the fact remains that, just as in our American republic, the power was in the hands of those who chose to exercise their civic right to participate and not in the hands of those who chose to express themselves merely by complaining about the status quo. Is that a flaw in the system? I suppose it would depend on whether you ask the voters or the complainers!
This isn’t ancient Greece. But what we can learn from considering the political heritage bequeathed to us by the Athenians is that democracy is not manna from heaven offered to some few worthy nations and not to others, but an ongoing political experiment that needs constantly to be revised and reconsidered as it morphs forward through history. There is no end to the books I could recommend to readers interested in learning more, but I can suggest two titles that I myself have enjoyed and that would be very reasonable places to start reading: A.H.M. Jones’s Athenian Democracy, republished in 1986 by Johns Hopkins University Press, which I read years ago, and also a newer book, Democracy in Classical Athens by Christopher Carey, published in 2000 by Bristol Classical Press in the U.K.

Thursday, November 1, 2018

Pittsburgh

Like many of you, I’m sure, I took biology in tenth grade. It was a long time ago. I barely remember high school, and specific classes even less than the general experience, but I do remember one specific incident in biology class that made a huge impression on me then and remains with me still. I’m sure it’s no longer allowed, as well it shouldn’t be, but the experiment itself was simple enough. A frog was set into a petri dish filled with cool water. The frog looked happy enough, having no concept of what was soon to come…and also not able to extend its neck far enough out to see that the petri dish itself was being held aloft by a black metal frame that also housed a Bunsen burner positioned just beneath the dish in which the frog was seated. (Do frogs even have necks?) The frog could have hopped out at any moment. But why would he have? He was content, he was (he thought) among friends. It was the fall following the summer of love. I have the vague sense—although this can’t possibly be true—that “Strawberry Fields” was playing softly in the background.

And then our teacher, whose name I’ve long since forgotten, lit the Bunsen burner and the fun began. The flame was low enough that the water would heat only very slowly, incrementally, almost unnoticed by us…and, more to the point, by the frog in the dish. The point of the experiment was simple enough: to demonstrate that, if the water were only heated up slowly enough, the frog would actually be paralyzed by the heat and thus unable to avoid the sorry end that appeared to await him and which in fact did await him, even though he could easily have escaped his fate earlier on had he understood things more clearly. Or she could have. It really was a long time ago.
The world is full of frogs in petri dishes.

Facebook started out as a pleasant way for friends to stay in touch and then grew into something that would surely have been unrecognizable to the people dreaming it up in Mark Zuckerberg’s dorm room. And, somewhere along the way in that amazing growth from 1 million users in 2004 to 2.2 billion active users at present, a line was crossed that cannot be crossed back over, one that obliges Facebook to deal somehow with the unexpected and surely unwanted capacity it possesses to be manipulated by its own users to influence elections and to invade people’s privacy in ways that many savvy users still can’t entirely fathom.
The whole concept of on-line DNA analysis started out as a clever way for people to learn more about their families’ histories and about their own genetic heritage. But as the data banks at ancestry.com, 23andme.com, and other analogous sites grow larger and larger on a daily basis, a line has been crossed there too that cannot be uncrossed and which will now oblige us all to deal with the ability of scientists, including (presumably) those who work for the government, to invade the privacy of people wholly unrelated to the enterprise and who themselves haven’t ever signed up or sent in a sample of their DNA for analysis. (To revisit what I wrote about this truly shocking phenomenon a few weeks ago, click here.)

Kristallnacht, the eightieth anniversary of which falls next week, was another such frog-in-a-petri-dish line. Things were dismal for the Jews of Germany and Austria long before 1938, but Kristallnacht—in the course of which single evening almost 2,000 synagogues were destroyed, 2,550 Jewish citizens died, 30,000 Jewish men were arrested and sent to concentration camps, and tens of thousands of Jewish businesses were plundered—made it kristall clear that whatever Jewish souls fell under Nazi rule were on their own and that that line into a dark, almost unimaginable future was one that simply could not be crossed back over. Indeed, the worst part of Kristallnacht was not the pogrom itself, as horrific as it was, but its implications for the future and the unavoidable conclusion to be drawn from the events of that gruesome night that there apparently was no level of anti-Semitic violence that the world could not somehow learn to tolerate. Kristallnacht, of course, did not come out of nowhere. Nazi anti-Semitism was hardly a secret. By 1938, the Jews of the Reich had been subjected to ever-increasing levels of degradation, humiliation, and discrimination for years. Obviously, they all noticed it, just as the frog in my classroom must surely have noticed the water warming as well. What the frog failed to grasp was that there was going to be a specific moment at which his ability to hop out of the dish would be gone and that he would have no choice but to meet his fate in that place. And that is what the Jews who had bravely decided to weather the storm in place also failed to grasp until it finally was too late to do otherwise and their fates were sealed, their doom all but assured.
Is Pittsburgh that line in the sand that we will all eventually see clearly for what it was? Or was it just a terrible thing that an awful person with some powerful guns managed to accomplish before he was finally subdued by the police? The answer to those questions lies behind the answers to others, however. Was Pittsburgh more about the rise of the so-called alt-right than about anti-Semitism per se? (The Anti-Defamation League noted that there was almost a 60% rise in hate crimes directed against Jews or Jewish targets from 2016, the year of the presidential election, to 2017, the year of Charlottesville. No one doubts that the statistics for 2018 will be higher still.) Or is this more about guns than Jews?  We have become almost used to gun violence in our country—we actually name the incidents (Columbine, Orlando, Sandy Hook, Parkland, Fort Hood, San Bernardino, etc.) because it would otherwise be impossible to keep track of them all—so it feels possible to explain Pittsburgh (or rather, to explain it away) as just one more notch on that belt rather than as a decisive moment in American Jewish history. But is that reasonable? Or is Pittsburgh less about Jews or guns, and more about the way that houses of worship seem specifically to enrage a certain kind of American bigot, the kind who can spend an hour studying Bible with gentle, harmless church folk and then take out a gun and methodically attempt to kill all the others in the class?

Or is this something else entirely? That’s the question I find churning and roiling within as I contemplate the events of last Saturday in Pittsburgh and try to make some sense of it all.
It’s interesting how little read or discussed the most accessible studies of anti-Semitism are—Léon Poliakov’s The History of Anti-Semitism, Edward Flannery’s The Anguish of the Jews, David Nirenberg’s Anti-Judaism: The Western Tradition, Bernard Lazare’s Antisemitism, Its History and Causes, Rosemary Ruether’s Faith and Fratricide, and Daniel Jonah Goldhagen’s The Devil That Never Dies, just to name the books I personally have found the most rewarding and informative over the years—even, or especially, by the very Jewish people who should constitute their most enthusiastic audience. Is that just because they are incredibly upsetting? Or is there a deeper kind of denial at work here, one rooted in a need to feel secure so intense that it simply overwhelms anything that might disturb people who live in its almost irresistible thrall?

I was a senior in college when I first read André Schwarz-Bart’s The Last of the Just. It is one of the few works of fiction I’ve read many times, both in French and in English, and is surely among the most important works of fiction I’ve read in terms of the effect it had on me personally and on the shaping of my worldview. (It also led, albeit circuitously, to my choice of a career in the rabbinate.) The book, in which are depicted episodes from the life of one single Jewish family from 1190 (the year of a horrific pogrom in York, England) to 1943 (when the family’s last living scion is murdered at Auschwitz), is upsetting. But it is also ennobling and, in a dark way that even I can’t explain entirely clearly (including not to myself), as inspiring as it is disconcerting. It was once a famous book—the first Shoah-based novel to become an international bestseller and the winner of the very prestigious Prix Goncourt in 1959—but it has fallen off the reading list of most today: how many young people have even heard of it, let alone actually read it? I suppose people still read Anne Frank’s diary and Elie Wiesel’s Night, the two most prominent books about the Shoah of all…but both are tied to their authors’ specific stories and neither is “about” anti-Semitism itself in the way Schwarz-Bart’s book is. In my opinion, that’s why they have remained popular—because they’re basically about terrible things that happened to other people—and The Last of the Just hasn’t.
What should we do in the wake of the Pittsburgh massacre? Clearly, we need to find the courage to speak out and to say vocally and very strongly to our elected officials that we cannot and will never accept that this kind of thing simply cannot be prevented in a society that guarantees its citizens the right to bear arms. And, just as clearly, we need to make it clear to the world that this kind of aggression, far from weakening us, actually strengthens us and helps us find the courage to assume our rightful place in the American mosaic. But we also need to lose our inhibitions about learning about our own history. Pittsburgh was about the recrudescence of the kind of anti-Semitic violence many of us thought to be well in the past. To understand the deeper implications of Jews at prayer being murdered in their own synagogue, we don’t need to read any of the million statements issued by public officials, Jewish and non-Jewish organizations, and countless individuals over the last few days. What we do need to read is Schwarz-Bart and Ruether, Nirenberg and Flannery, and to internalize the lessons presented there. And we need to take the temperature of the water in our petri dish and only then to negotiate the future from a position of informed strength characterized by a clear-eyed understanding of what it means to be a Jew in the actual world in which we live.