Thursday, December 22, 2011

Chanukah 2011


So what’s the first thing you think of when someone mentions Chanukah? I suppose most of us would instantly conjure up a mental snapshot of a family—perhaps even our own family—gathered around the menorah, lighting the candles, and singing Maoz Tzur. Some of us might think first of the miracle of Chanukah, of the old story we all know about the tiny cruse of oil that should only have held enough oil for one single day but out of which miraculously poured eight times that much…so that the great menorah in the almost-inmost sanctum of our holy Temple could not only be relit, but would remain lighted until fresh oil could be prepared under the watchful supervision of the High Priest of Israel. Still others will think of the more gustatory trappings of the holiday: chocolate Chanukah gelt, fried latkes, deep-fried jelly doughnuts, or some other set of semi-poisonous delights we all seem to be entirely able to consume without an ounce of guilt (perhaps I’m saying more here about myself than I meant to) for the course of eight long, cholesterol-laden days of family togetherness.

Myself, I try to spend at least some time in the course of the holiday thinking about the shikkutz m’shomeim. The what? For something that rests at the very heart of the holiday, it’s odd how few people know the riddle of what the shikkutz m’shomeim actually was—or have even heard of it. But the shikkutz is at the very center of the story we tell, or at least it should be…and if its precise identity remains a mystery none has yet solved in a universally convincing way, then the riddle itself constitutes a puzzle we have lost interest in solving only to our own detriment. I’m guessing most of my readers won’t ever have heard of it. In some books it appears in its ten-dollar English-language version as the “abomination of desolation.” Does that help? Some translations offer the even less decipherable “desolating sacrilege.” Is that better? I didn’t think so. But the shikkutz m’shomeim is not only something you should know about, but it’s something I think we could all profit from discussing seriously.

The historical sources for our Chanukah festival aren’t that many. And they aren’t in agreement about many minor details of the story and a handful of truly important ones. There is the ancient book called the First Book of the Maccabees, written in Hebrew towards the end of the second century BCE by a Jewish author in Maccabean Jerusalem who wished to record the events that led to Jewish independence from the Seleucid empire (the sort-of-Greek empire from which the Maccabees wrested, if not de jure independence, then at least the de facto right to behave formally as an independent state) in the decades immediately prior to his own day. There’s the work confusingly called the Second Book of Maccabees—confusingly because it has no literary or historical relationship to the First Book, being an entirely distinct work—which was written in Greek, probably in Alexandria, in the same time frame as First Maccabees, but which is itself only a one-volume summary of a much longer work in five volumes penned by one Jason of Cyrene (in today’s Libya) recounting the history of the Maccabees from the perspective of the Greek-speaking diaspora. (The Jews of Egypt and Libya spoke Greek in those days; it was their translation of the Bible into Greek, in fact, that today is the oldest still-extant full translation of Scripture into any language at all.) Josephus, the first-century Jewish historian, also took a crack at the Maccabees, discussing their story in detail in his Antiquities of the Jews and drawing, apparently, on many no-longer-extant sources. And then there’s the Book of Daniel, the sole source for the story that actually is in the Bible.

The Book of Daniel is a complicated work that was clearly put together from several anterior sources, but which almost definitely reached its final form—the form in which we can find it in any Bible—in Maccabean Jerusalem. The reasons scholars think that would take us too far afield here for me to discuss in detail, but the short version is that the last few chapters of the book, written in obscure, gnomic Aramaic that only a true literary or historical sleuth could love, appear to be discussing not the story of Daniel at all, the personality featured in the first part of the book who lived centuries earlier, but rather the events of the Maccabean revolt itself. And at the center of that account, as well as the account in First Maccabees, is the shikkutz m’shomeim, which is what I want to write about today.

At the heart of all these stories is the notion that the Temple was desecrated and then restored to its original state of unsullied purity through the brave actions of the Maccabees, itself a term of obscure origin that came to denote the five brothers who led the revolt against the Seleucid emperor Antiochus IV. (Did you know, by the way, that there are reliable portraits of Antiochus? He’s the only person in the Chanukah story to have left behind pictures of himself, mostly on coins he had minted with his own image stamped on them, one of which is featured above.) That part, we all know: some version of the purification and rededication of the Temple is at the heart of every version of the Chanukah story. Indeed, the name of the holiday itself means “Dedication” (in this sense, “Rededication”) and references that specific event. But what exactly was going on in the Temple during the years leading up to its recapture, repurification, and rededication?

Let’s listen to the author of the First Book of Maccabees, describing the edicts set in place by the king to buttress those Jews who wanted to “reform” Judaism by turning it into a Hellenistic cult:
The king sent letters by messengers to Jerusalem and the cities of Judah ordering that the citizenry should follow strange new laws. He forbade the sacrifice of traditional burnt offerings and libations in the Temple, and demanded that the Jews profane the Sabbaths and festivals. Furthermore, he ordered that the sanctuary be polluted, and that there be set up altars, sacred groves, and special chapels devoted to the worship of Greek idols, and that in the Temple they sacrifice swine's flesh, and unclean beasts. Moreover, the king commanded that the Jews leave their sons uncircumcised, and make their souls abominable with all manner of uncleanness and profanation to the end that they might forget the law, and abandon its ordinances. And whosoever would not do according to the commandment of the king, the king further said, he should die. To that end, the king appointed overseers over all the people, commanding the cities of Judah to worship only in accordance with these new regulations. As a result, many evils were perpetrated in the land…and then, on the fifteenth day of the month of Kislev, in the one hundred and forty and fifth year (of the Seleucid empire), he had the abomination of desolation set upon the altar, and altars built dedicated to the Greek gods throughout the cities of Judah on every side. They burnt idolatrous incense at the doors of their houses, and in the streets. And when they had rent in pieces the books of the law which they found, they burnt them with fire.

That’s pretty strong stuff. But what exactly was this “abomination of desolation” that was set up upon the altar itself in the Temple? That, the author forbears to say. The Book of Daniel is no clearer. Casting his account as a prediction rather than as history, the author imagines old Daniel looking centuries into the future and seeing his, the author’s, own day: “They (in context, the armies of an alien, yet unnamed king) shall profane the sanctuary…and take away the daily sacrifice, and they shall set up the abomination of desolation in that place. Furthermore shall (this king) corrupt by flatteries those who do wickedly against the covenant; but the people who know their God shall be strong, and eventually they shall take action….” But what actually was it? That neither author wishes to say.

Some scholars, relying on an old rabbinic tradition that permits mentioning the names of idols only when those names are deformed in some clever way so as simultaneously to insult the gods they are imagined to represent, have suggested that the Hebrew shikkutz m’shomeim was meant to reference Zeus, whose name in Aramaic was sometimes Baal Shamayim, “the Baal of heaven.” The word shikkutz would then be an insulting reference to Baal, just as the term has survived in the vulgar speech of some North American Jews in a slightly bowdlerized version not used in decent discourse. And the m’shomeim part would simply be a pun on Shamayim, not referencing the god as being “of Heaven,” but as being destructive and repulsive. So okay, it’s an insult…but the question of what the thing itself actually was remains unanswered. Some imagine it to have been a statue of Zeus that was set up atop the altar so that every animal sacrificed there would be offered up beneath the stony gaze of the chief of the Greek pantheon. Other scholars have imagined it to be a meteor of some sort, or to reference the pigs themselves that were now to be offered up in the Temple as a sign that the Jews had signed on to one of the cardinal elements of Hellenistic philosophy: that, there being only one God, it would be a sign of brotherhood for all to worship God in his most elevated and sophisticated manifestation as mighty Zeus, the name given him by the most elevated and sophisticated of his followers, the Greeks themselves.

The whole miracle story featuring the tiny cruse of oil appears first in the Talmud and has no real antecedent in any of these contemporary or near-contemporary sources, all of which understand the great accomplishment of the Maccabees to have been the removal of the shikkutz from the Temple. (There’s another story featuring a miracle regarding Temple oil at the beginning of the Second Book of Maccabees, but it’s entirely different from the story we all know from the Talmud.) Whether it was an actual statue, or some other thing that so revolted the ancients who knew exactly what it was that they could not bring themselves to say its name aloud or to describe it other than cryptically, who knows? But in the contemplation of that riddle lies a lesson for us all.

What I get from this whole story about the shikkutz m’shomeim is that even the most sacred precincts can have introduced into them items that turn them from places of pious worship into places of grotesque depravity. The place, the sanctum, the sanctuary, therefore, is only space. Holy space, perhaps…but only when holy things happen there. To suppose, therefore, that the mere existence of a sanctuary is enough to guarantee that all that unfolds there is by definition sacred work…that is, if anything, the precise opposite of the lesson these ancient sources gather (at least in my own mind) at this time of year to teach us. For a community to be worthy of the designation of k’hillah k’doshah, it needs to do holy things, to do holy work, to seek to know God not merely by existing in some room designated as holy space, but to take to heart the ideals of faith that are preached in that place…and then to act on those ideals to nudge the world even just slightly closer to the messianic moment that will herald the redemption of humanity. The Temple retained its sanctity, of course, even when the shikkutz was in place. But it was a dormant value in those dark days, not something that existed actively but only passively within the folds both of its history and its destiny. That could be a satisfying thought…but the Maccabees didn’t think it was enough and neither should we. To be worthy of being called a “holy community,” a k’hillah k’doshah, its members must further God’s work on earth.

Thursday, December 15, 2011

Adventures in Facebookland


I don’t have a Facebook page. For a long time, I considered that a mere detail, something to be owned up to if and when someone else asked about it, but otherwise one tiny item on a very long list of things I don’t have one of. Some of the things on that list, mind you, surprise even me, at least a little. I like to skate, but I don’t own a pair of ice skates. I like Mozart, but I don’t own a boxed set of the piano concertos. (When necessary, I borrow the cantor’s.) I am entering my second decade as editor of the quarterly journal, Conservative Judaism, but I don’t own a scanner or a fax machine. It never seemed odd to me, however, that I didn’t have a Facebook page. Joan has one, but she almost never goes there to see who’s posted what on her wall or formally to ignore whoever has invited her to be his or her friend. (She prefers, I believe, passively to ignore them by not making herself aware of their invitations in the first place.) I know what Facebook is, more or less. From time to time, I open Joan’s page to see what’s new with whom (and, of course, to spy on my children, although it’s hardly spying if everybody else in the universe can also see what’s on their pages), but I’ve never been drawn to the experience especially and certainly nowhere near arrestingly enough to want personally to dive into those waters. Maybe it’s an age thing: Facebook users in my age category constitute a mere 5% of the total number of users, whereas more than three-quarters are between the ages of 13 and 34.

All that being the case, you can imagine my response when I noted in the paper this week that something like two-thirds of the entire population of the United States have Facebook pages. Talk about being left behind! And those 200,000,000 people are only a quarter of the world-wide total of 800,000,000 users. That’s a lot of faces! And those, so say the Facebook people themselves, are only active users. People who opened up pages once but then never visited them again are not counted. Nor are people who visit so infrequently as not to show up as “real” users at all in their statistics. It’s a big number. Of the countries of the world, only India and China have larger populations than Facebookland. The third largest country in the world in terms of population, our own, has fewer than half as many citizens as Facebook has people signed up. The world has more people signed up for Facebook than all the countries of Europe together have citizens. (By comparison, Twitter has a mere 380 million users, LinkedIn a mere 100 million.) You get the picture. A big number. A lot of people. If Facebook were a country, it wouldn’t be Liechtenstein.

And also a lot of money. Facebook is the third largest web-based business in the United States, right behind Google and Amazon. Facebook’s value was estimated a year ago at about $14 billion. And now they’re making plans to go public, and hoping to raise about $10 billion in the process. No matter how you measure it, that’s a lot of money for a company that was only launched in February of 2004, not a full eight years ago, and which (and, yes, I know how old this makes me sound) doesn’t actually make anything at all. Except friendships. Sort of.

And that brings me to the topic I’d like to raise this week for discussion. Friendship, for all it feels like a basic feature of human life, is an elusive thing both to define and to cultivate. What is it exactly? We all have friends, obviously. And we understand that being someone’s friend is qualitatively different than being someone’s relative (or someone’s employee or neighbor or love interest or business partner). But what are the essential traits that distinguish friendship from other kinds of relationships? Has the concept evolved naturally over the millennia that human beings have been befriending each other? If so, then what are we to make of the great innovation brought by Facebook to its four-fifths of a billion users: the notion that friendships have no natural course, that friendships are only dormant but never actually defunct and can always be resurrected, that one’s friends can always be located and (since they’ve already—and apparently permanently—been befriended) friended? Or maybe the right term in such a case should be re-friended, in which usage the re- (just like in refried beans) hardly means anything at all since they were, so the fantasy, friends all along anyway.

For all that America gave Facebook to the world, Americans themselves are actually not very good at being friends. There was a study published a few years ago in the American Sociological Review (a journal published by the American Sociological Association), according to which Americans on average have fewer close friends today than ever before. According to the study, a full 25% of Americans have no close friends at all. And among people who do have such relationships, the number of close friendships they reported having dropped by half, from four to two, in the twenty-one years from 1985 to 2006. Moreover, even the quality of the friendships we do maintain has dropped over the years. C.S. Lewis, one of the few Christian apologist-authors whose work for some reason I don’t find off-putting, wrote this in his book, The Four Loves:

To the ancients, friendship seemed the happiest and most fully human of all loves; the crown of life and the school of virtue. The modern world, in comparison, ignores it. We admit of course that besides a wife and family a man needs a few “friends.” But the very tone of the admission, and the sort of acquaintanceships which those who make it would describe as 'friendships', show clearly that what they are talking about has very little to do with that philía which Aristotle classified among the virtues or that amicitia on which Cicero wrote a book.

I know what he means. I read Cicero’s book, called On Friendship, when I was in graduate school and was very impressed both by the clarity of his prose and, more to the point, by the picture of friendship he draws in the book. (Speaking of Cicero, have you all read the first two volumes in Robert Harris’s terrific, so-far-unfinished, trilogy about Rome in the age of Cicero? The first two books, Imperium and Conspirata, were two of the best pieces of historical fiction—and two of the best lawyer novels, to boot—I can recall reading in years. I loved them both and so will you! The third book is coming soon.) But I digress…and the portrait Cicero draws of friendship in his book is stunning. Friendship, he writes, improves the world because friends have the unique capability of making each other virtuous, of bringing each other’s nascent sense of virtue to the surface and to the fore. He writes with the deepest passion about friendship and the ways it provides the only truly suitable background against which people might conduct their family life in the foreground with dignity and purpose. Life, he suggests, without friendship is mere existence. I vaguely recalled this quote, which I managed to find on-line and which I think sums the concept up admirably: “All excellence is rare,” Cicero wrote, “and that moral excellence which makes for true friendship is as rare as any. On the other hand, it would be unreasonable and presumptuous for people to expect to find in their friends qualities which they themselves can never really hope to attain, or to demand from their friends an indulgence which they are not prepared themselves to offer. Friendship was given to us to be an incentive to virtue, and not as an indulgence to vice or to mediocrity! Although solitary virtue cannot scale the peaks of greatness, one may yet hope to do so with the loyal help of a comrade. And comradeship of this kind includes within it all that human beings most desire.”

Having a true friend is thus to be understood as one of life’s great boons. It is the platform on which greatness, so Cicero, rests, the springboard to the kind of virtuous living to which we all aspire but which none of us can quite attain on his or her own. More than love itself—another of life’s necessities, but one that speaks more to the human need for passion and pleasure than specifically to the need for virtue—friendship creates the context for life lived large and lived well.

And now we have the Facebook version, the kind that makes it possible not to need to spend a lifetime cultivating true friendship with two or three other souls one chooses as one’s intimates in the course of shared decades of moral and intellectual growth, but instead to “friend” thousands of people almost at once. Obviously, no one could ever have that many friends by following the old-fashioned model, but that is the beauty of the new concept: no matter how long ago you graduated from elementary school, those pals you played in the schoolyard with are still there, still out there somewhere in the ether, still available to you if only you choose to friend them and they you. That you have no real contact with them doesn’t matter. That you don’t really have any emotional ties to them is deemed irrelevant. The maximum number of friends you can theoretically have on Facebook is 5,000. But there are apparently ways to get around that restriction and there are reportedly people out there with hundreds of thousands of friends. I hope they don’t all expect birthday cards!

I believe that friendships, like most things, have natural life spans. The boys from my cabin at Camp Oakdale were true friends of mine when I was nine, ten, and eleven years old. (The camp closed after that and I, as must even eleven-year-olds under the right circumstances, moved on.) I don’t miss them. I suppose I’m mildly curious about where they are today, about what became of them. But the truth is that—to speak honestly—they’re just names from my past, albeit ones that evoke very pleasant memories. We’re not friends. Nor am I friends with the guys from my Hebrew School car pool. Nor with the boys with whom I shared my mercifully brief career in the Little League. Nor with all sorts of people I knew once, whose company I enjoyed, whom I thought of as my friends…and whom I haven’t seen in decades. I wish them well! But I don’t want to be friended by people who aren’t actually my friends. And I don’t need to be friended by people who actually are my friends. So who needs the whole thing? And that is why I don’t have a Facebook page!

I said I didn’t want a Kindle and now it’s my favorite toy. I said I didn’t want an iPhone and ditto. (I heard that! They’re both my favorites.) But I really don’t want a presence on Facebook. I certainly don’t want five thousand Facebook friends. I can barely keep up with the real ones I actually do have!

Thursday, December 8, 2011

Going to Elmont


Last week, I wrote about memory. And this week I propose to write to you about time. What is this, an undergraduate course in Too Big Ideas? (I suspect that question probably has more to do with the way I remember my undergraduate experience—for some reason I to this day cannot contemplate the nature of Being without reaching up to see if my mutton chops have grown back—than with anything else, but I want to write this week about time, not memory!) So let me start by asking out loud some questions most of us really haven’t asked ourselves since we really were back in college. What actually is time? Does it really exist? Or is it just something humanity has made up to help explain the universe, to impose order on events that would otherwise exist as discrete spheres of experience related to each other only by content and not by sequence or temporal proximity? Those really are questions only an undergraduate could love. But I had an experience last week that left me feeling outside time in a way that I’ve only occasionally experienced. And that’s what I’d like to write about this week.

Regular worshipers at Shelter Rock have heard me say from the bimah many times, especially as we prepare for Yizkor, that time—the concept of “time” itself—is just a midrash. By that, I generally mean that we need to remember that the boundaries between time-past and time-present are more porous than we generally allow ourselves to imagine, that the historically dead are not necessarily the experientially dead, and that the ghosts are no less real for being unreal. I teach that lesson because it truly is reflective of my own experience of the world and the way I feel I have successfully—or at least semi-successfully—brought the evidence of my own perceptive consciousness to bear in deciphering the universe. The reality is that I miss my parents all the time. But the reality is also that they’re not quite as gone as I recall once thinking they were going to be. The French word for ghost, revenant, literally means “one who has returned.” I like that. The English “ghost” is related to the German word Geist, meaning “spirit.” I like that less. And, in fact, the ghosts I’ve experienced in my life are far less relatable-to as spiritual constructs or as otherworldly metaphors than simply as revenants, as people who, turning out to be less done with the world than they (and their people) may well once have thought, come back for a brief—sometimes the briefest—return engagement.

One of the mysteries of my life was where my grandparents were buried. Both my grandfathers died before I was born, but I knew both my grandmothers. My father’s mother, though, died when I was only four years old. I remember her a little—her voice mostly, a little bit how her skin felt, plus some olfactory memory I can never quite pin down that must be related to some kind of perfume she liked to wear or to some kind of cooking or baking I associate with her for some by-now-long-forgotten reason—but not really much. But my other grandmother, my mother’s mother, I knew well. She lived in Bensonhurst. I’ve written before about watching the Verrazano Bridge being built in the course of innumerable Sunday visits to her home on 84th Street in 1963 and 1964. She died, regrettably, just before my bar mitzvah. That whole incident, I remember clearly. To say her death in February overshadowed my bar mitzvah in May is not exactly correct, but it’s not exactly incorrect either. (I only ran into her ghost years later, though…and at my actual simchah she was as spectrally missing as she was physically absent.) So she died that February during a teamsters’ strike and was brought to her grave in a rented station wagon rather than a proper hearse. (Isn’t it funny how you really do remember these things over the years, almost as though they really mattered!) But where that grave was, I had no idea. The funeral was somewhere in Brooklyn. The burial was somewhere else…but where that somewhere was I wouldn’t have known. How could I have? I was an upset little boy being schlepped along by events that any child would find at least mostly unfathomable. Nor did I ever undertake later on to find out exactly where that cemetery was.

If my mother visited her parents’ graves, I never heard about it. Or I never thought I did. (See below.) Probably she would have gone from time to time, but I was certainly never taken along. (My parents were a bit odd in that regard: a big part of their parenting concept was shielding me not only from death, but even from the reality of disease. My parents, both of them, even attempted—in this only semi-successfully—to shield me from the details surrounding my mother’s final illness. But I’ll write about that another time. Or maybe not, given how painful that whole sequence of events was for me then and still, at least in some attenuated way, is for me today even just to recall.) And then she died. If my father ever visited my mother’s parents’ graves after he became a widower, I never heard about it. Maybe he did. Maybe not. And then Joan and I left New York and were away for almost twenty years. Eventually, we came back. But by then my father had taken whatever information he had to impart on the matter to his own grave, yehi zikhro varukh.

And so I was left not only not knowing where my own grandparents were buried but also having no one to ask. My mother’s sister predeceased her, as did her brother-in-law. She herself had no contact with her father’s family, just as I eventually lost contact with hers. As a result, I knew no one at all who might have remembered. Plus, obviously, in the meantime a long time had passed. My grandfather died in 1948, my grandmother in 1966. We came back to New York in 2002. Even if I had somehow been successful in resurrecting some sort of relationship with one of my mother’s first or second cousins—who’s to say that they would have remembered where my grandparents were buried? And how exactly was I going to find them anyway? Wisely or unwisely, I let the matter go.

And then, a few weeks ago, I found myself rooting around for some reason in my father’s papers and found my grandmother’s death certificate. (Why do I remember that my parents had tickets for the Broadway production of Peter Weiss’ play, Marat/Sade, for the night my grandmother died and that they never did get to see the play? Is that important?) I hadn’t known my father even had a copy, let alone that I did. I certainly hadn’t ever seen it or read it. But I read it once I found it…and found out all sorts of interesting things. For one thing, I learned what my great-grandmother’s maiden name was. (My grandmother’s mother was born Jennie Mehlman.) But far more arresting was the detail at the bottom of the form that noted that my grandmother’s burial was to take place at…of all places…Beth David Cemetery in Elmont. I’ve been there a thousand times in the course of my years at Shelter Rock. Maybe ten thousand. It couldn’t be closer. In fact, no cemetery is closer to here, I don’t think. But who knew? Sometimes the challenge really lies more in knowing the right question to ask than in finding its answer.

And so Joan and I set out to find my grandparents’ graves just last Sunday. After all those years of not knowing where to go, they were almost eerily effortless to locate. When I got to the office at Beth David, there was no line to stand on. When I asked the fellow at the window if he could locate my grandmother’s grave, he asked for her name and when she died. About fifteen seconds later, he was handing me a printed-out map of the cemetery with the location of my grandparents’ graves circled in blue ink. We got into the car, drove to the corner of Sinai and Wilson. (Someone, not myself, will eventually write an interesting essay about the names they give to streets in Jewish cemeteries.) The gate into the section owned, or at least once owned, by the Zembiner Benevolent Society was just where the man in the office said it would be. And there, in the row of graves furthest from the road were my grandparents’ graves.

This was last Sunday. I was twelve years old the last time I stood in that spot. The graves were tidy and neat, the yews trimmed and healthy-looking. (My mom must have paid for perpetual care, although there weren’t any stickers on the stones saying so.) I don’t know what I expected. I had hoped more of my family’s graves would be there, but it was just them. I had hoped the stones would say more about them, but they only note their names, the dates they died (weird that my grandfather died in 1948 on what would nine years later be the day Joan was born), and that they were loved by each other and by their children. (My grandmother was loved by myself as well, but I suppose there wasn’t room on the stone to go into that much detail. Or perhaps my mother and her sister were over-valuing the concept of the two stones having symmetric legends.) Nothing more. The phrase “Sinai and Wilson” sounded vaguely familiar to me. We had no friends or family in Elmont, but the expression “going to Elmont” also has a familiar ring to it—maybe that’s what my parents, zealous to a fault to shield me from the reality of death, called it when they actually did go visit my grandparents’ graves. What did I know? They certainly didn’t mean they were going to the track.

And so I was left with more questions than answers. Why the Zembiner Society? My mother once told me that she thought her father was born in Odessa. (Only in retrospect does it seem odd to me that she couldn’t say for sure where he was from.) Zembin, I’ve learned this last week, is a town in Belarus about forty miles from Minsk, the capital. Is that where my grandfather was from? (The Jews of Zembin, along with the Jews of nearby Borisov, were annihilated during the war. The www.jewishbelarus.org site doesn’t suggest that there is a Jewish community there today. And even if there were, would they still have records from so long ago? If my grandparents were alive, they’d both be 128 years old.) Or did my grandmother just buy burial plots from them after my grandfather died? And where are all the other graves—the gravesites of all my grandparents’ siblings, for example, or my great-grandparents’ graves? Joel and Jenny Kaufman, née Mehlman, lived on East 113th Street in Manhattan. But where they’re spending the rest of eternity, I have no idea. I suppose I should have asked at Beth David if they were in the data bank too. Next time!

The whole experience was, as noted, not what I had hoped it was going to be. (One thing I do know about the ghosts, however, is that they rarely show up where and when you’d think.) Still, I’m glad I went. There was a certain peacefulness in that place, a certain sense of undisturbed-ness and permanence. I have my grandfather’s name as my middle name. Somewhere, I think I have his naturalization papers. I have quite a few of my grandmother’s paintings—she was quite a good artist—and, I think, some of her jewelry. The rest is all gone…but their graves aren’t gone at all, it turns out, and neither, it also turns out, are their ghosts. This being real life and not a Hollywood movie, however, their ghosts just weren’t there haunting their own graves while they waited for over forty-five years for their sole living descendant (other than my children) to wander by for a visit. That part, I think, ghost story writers just made up.

Thursday, December 1, 2011

Memory and Honesty


Memory is the sea in which we spend our lives swimming forward (or at least: in which we spend our lives treading water), the context that gives meaning to the perceptive abilities we bring to bear in our efforts to decipher the world and grant meaning to what we see, hear, taste, smell, and feel of it. Moreover, an intact, healthy memory is considered the sine qua non of mental health itself: one of the chief hallmarks of mental illness is precisely the inability to distinguish between fantasies and “real” memories, between things we’ve once imagined happening and things that actually did occur, between dreamscape and landscape, between idle thoughts we really may once have had and events that really did once take place. No one has ever been charged with perjury for saying on the witness stand, “I got a clear, unobstructed view of their villainous faces when armed robbers burst into the Rite-Aid where I had gone after work to buy a toothbrush,” when what that person, no doubt trying to speak honestly, can really only mean is that he or she, while speaking under oath months later, remembers standing there in that drugstore on that fateful day and seeing what happened when the robbers burst into the place, their guns drawn and their larcenous intent all too obvious.

All that being the case, I was especially interested in an article that appeared in the paper the other day about the reasonableness of relying on memory in the adjudication of criminal trials. Perhaps some of you saw it as well. (If you are reading this on-line, you can find the article, written by Laura Beal and published in the Times last Monday, here.) It’s an interesting piece, and well worth your time, for a variety of reasons, but the detail that caught my attention had to do with the intersection of eye-witness testimony and DNA evidence: the author wrote, to my mind amazingly, that eye-witness testimony had been crucial in the convictions of a full three-quarters of the 280 defendants whose convictions were subsequently overturned on the strength of incontrovertible DNA evidence. That is a detail that no citizen who cares about justice can feel good about passing quickly by.

Many of my readers know that I am a subscriber to the electronic newsletter of the Innocence Project, an undertaking founded in 1992 by Barry Scheck and Peter Neufeld as part of the Cardozo Law School of Yeshiva University but that now exists in its own right as a non-profit organization devoted to seeking justice for the incorrectly convicted. The numbers tell the story, and they are astounding: 280 convicted individuals exonerated, seventeen of whom had actually been sentenced to death. Collectively, and even more astoundingly, those 280 individuals served over 3,600 years in prison for crimes none of them had committed. For the record, the same DNA evidence that gained freedom for the wrongly convicted also led to the conviction of the real criminals in 125 of those cases. So this particular sword cuts both ways! But, as satisfying a thought as that may be (and surely is), I find myself far more profoundly drawn to the statistic mentioned above: that the convictions of a full 75% of the 280 individuals who were subsequently exonerated were based on eye-witness testimony that was apparently incorrect and untrue. That, I believe, is a number that all Americans who care about justice should pause thoughtfully to consider.

There is a strong racial element in play here as well: in forty percent of the cases, the incorrect testimony involved a witness misidentifying an accused individual of a different race. There could, I suppose, be malice involved—and it is hard to imagine a more pernicious form of non-violent racism than choosing to lie about someone on the witness stand solely because of that person’s race—but my suspicion (and the opinion of the Innocence Project as well) is that something far more subtle is afoot here: people are simply less good at recognizing people of other races than they are at recognizing fellow white people or fellow black people or fellow any kind of people who are whatever it is they themselves are. The “other” is weird, strange, a bit unrecognizable. The old canard that all “those” people look alike is derogatory and insulting, but it’s not entirely untrue: people of different races and ethnic groups apparently do tend to look far more like each other in the eyes of outsiders than they do to each other. You can read more about the Innocence Project on their website at www.innocenceproject.org, but the detail I want to focus on here is the one relating to the reliability of eye-witness testimony.

Our Torah clearly understands that eye witnesses can simply be wrong. And, indeed, the Torah seems to address the issue through ancillary legislation designed to eliminate the possibility of error. Torah law, for example, specifies that no one may ever be convicted on the basis of one sole eye witness and that any such witness may only speak in court if the court has determined in advance that, at the very least, a second witness will testify to having seen the same thing. (The Torah laws prohibiting defamatory speech are set aside for witnesses testifying in court. But they are specifically not set aside if it is clear in advance that such testimony cannot possibly lead to a conviction.) Furthermore, there are two different kinds of questions which witnesses must answer, one set related to the specific details of the crime in question and the other related to what moderns would label as circumstantial details: not what the accused was seen doing at some specific place and time, but what color sweater she or he was wearing at the time or what kind of shoes. It is true that there is some extra leeway with respect to the questions we would label circumstantial: if either witness or both witnesses say that they do not know the answer to one of the circumstantial questions, the court may still consider their testimony. But if either witness cannot answer a specific question relating to the time or the place of the alleged incident, or to the identity of the person seen doing the thing the accused stands accused of having done, then such testimony is discarded as invalid and the trial cannot proceed unless new eye witnesses can be produced.

Furthermore, the Torah understands that people get it wrong all the time and so introduces the concept of hatraah, a kind of decisive check on the system that more or less guarantees that no one will ever be convicted falsely. Hatraah means “warning,” and the concept is simple enough: the witnesses must also testify to the fact that they personally warned the accused of the consequences of his or her actions. It sounds simple, but what the Torah is really saying is that we do not rely simply on eye witnesses claiming to have seen something, that they must personally have experienced the kind of relationship with the accused that will subsequently make it almost impossible, assuming their probity, for them to give false testimony. Rambam explains how the whole warning thing works: “How is a warning administered? They say to someone, ‘Desist, for the action you are about to undertake is a sin and you will become liable to be executed by the court’ or ‘Desist, for what you are about to do is a sin and you will be severely punished if you are convicted.’ And then the person to whom they are speaking must acknowledge their words and say something like ‘I know, but I am going to proceed to commit this act nevertheless.’” And even that is not enough: according to Rambam, the individual being warned must then commit the act in question almost immediately. If more time passes than would be necessary for one average citizen to greet another, then the accused can only be convicted if it can be demonstrated in court that he or she was warned a second time.

For good measure, eye witnesses who insist that they knew the accused well enough to identify him or her and that they personally administered the requisite warning must then be formally warned themselves that, should their testimony prove incorrect, they will bear responsibility for the rest of their lives for the execution of a falsely convicted individual—and not solely for the life of the accused, but also for the lives of all of his or her now-never-to-be-born descendants.

So how many people do you imagine were ever convicted under such a system? The answer is…who knows? It’s mentioned in the Mishnah that any court that executes more than one person every seven years is to be condemned as one excessively willing to convict. As great a luminary as Rabbi Akiba is cited as having remarked once that he couldn’t really understand how anyone at all could ever be convicted under the Torah’s system. Nor is it entirely obvious that the Jews under Roman rule had the right to execute anyone anyway. But all that is beside the point, which is that our tradition clearly understands that even the eye witness testimony of the most reliable and honest individual can be fatally flawed. And its remedies—requiring always more than one witness, insisting that the witnesses not only testify that they saw the accused but that they actually knew him or her, demanding that the witnesses be able to say in court that they spoke to the accused and know for a fact that he or she was acting with malice aforethought—are clearly based on the assumption that even the most well-meaning person can simply be wrong. So you see how infuriating it is to me, and should be to us all, when someone announces that the Judeo-Christian tradition (whatever that is) supports the notion of capital punishment as it is carried out in our country. Yes, of course, our Torah demands that death be meted out for serious crimes and sins. But the detail that is rarely mentioned is that the same Jewish tradition would never countenance the conviction of anybody at all based solely on circumstantial evidence or the eye witness testimony of a single individual.

The sages of ancient times would have loved DNA. And they would especially have loved the Innocence Project, with its insistence on the basic unreliability of eye witness testimony that has not been shored up so unassailably that the chances of it being incorrect are almost nil. That was what our rabbis were trying to do in ancient times with all the requirements they found hiding in Scripture just behind the texts that appear blithely to be decreeing capital punishment for all sorts of grievous wrongdoing. And it is what DNA testing is able to do today in a far more comprehensive manner. I can’t imagine the ancients wouldn’t have embraced the concept had they been able to imagine it, because in the end, the underlying principle is the same: to find someone guilty in court, the evidence has not merely to be compelling, but—to the greatest degree possible—incontrovertible.

Thursday, November 17, 2011

Looking Away

Like all of you, I’m sure, I’ve been reading with some combination of horror and lascivious fascination about the scandal surrounding the arrest last week of Pennsylvania State assistant football coach Jerry Sandusky. I do not wish to write about the specifics of the case against Sandusky, however. For one thing, I have nothing to add to what everybody already knows, which is what has been repeated endlessly in the newspapers and on television and the radio in the course of the last week. Nor, as you all know, do I ever have any interest in looking past the civic obligation we all share to grant the presumption of innocence to people who have not actually been found guilty in a court of law. What I do wish to write about, however, is a feature of the case against Sandusky that actually has nothing at all to do with him personally.

Unrelated to the question of the guilt of the accused is the question of the behavior of all those others who saw evidence that he was guilty, or who thought they did, and who either did nothing at all about it or else contented themselves with passing the buck along to someone else who ultimately did nothing about it. Sandusky was arrested and charged with forty counts involving the alleged molestation of eight boys over a fifteen-year period. Are there more children involved who simply have not yet come forward? There’s no way to know if there are, or if any of those theoretical other children will now come forward, but the more interesting question to ponder is how abuse on this scale could take place—none of it behind locked doors and most of it in public space in a facility open to staff, students, and visitors alike—without anyone acting decisively to put an end to it. (The investigation that led to the grand jury indictment was undertaken only after the mother of one of the boys came forward to report her son’s abuse after it had been going on for three years. But that boy’s experience was recent compared with what the police now believe happened to some of the other alleged victims.)

The grand jury testimony is beyond chilling not only in terms of the horribleness of what the children involved allegedly experienced, but also in terms of what the story implies about human nature itself. A janitor walks into the shower room at Penn State in 2000 and sees what he takes to be the sexual assault of a child in progress before his eyes. He reports the incident to his superior, as he was told he was supposed to do in such a situation, but the superior in question does nothing at all, failing both to inform the police and to bring the charge to the attention of other school officials. Two years later, a graduate assistant walks into the same shower facility and sees what he too takes to be the rape of a child of about ten years of age in progress. He duly reports the incident to the athletic director of the facility, as he had previously been instructed to do in such an event, but the incident is never reported to the police. Nor does the athletic director bother to inform his own higher-ups. Procedure is followed, at least to a certain extent. But nothing at all happens to safeguard the children who come to that facility to enjoy a day of sports and competition. The matter is eventually buried, forgotten. The world keeps spinning. No one knows. And no one seems to care either.

Let’s put ourselves in the picture. We see something that appears incredibly wrong. We could be wrong about what we think we’ve seen, but we have no more reason to doubt it than any of us ever doubts anything we see with our own eyes that our brains have no difficulty deciphering or interpreting. Nor are we expected to go to law school and only then decide how or whether to proceed. Indeed, the specific legal question of whether the person we believe that we saw behaving poorly is guilty of an actual crime is hardly our call anyway—in our great land, people are found guilty by juries of their peers or by judges trained in the law, not by bystanders even if they walk in on them in flagrante delicto—but we surely understand that something very wrong is going on. And yet we either do nothing at all or else feel done with the matter once we report it, even though we understand perfectly well that nothing has happened, that the person we saw with our own eyes behaving incredibly poorly and endangering the welfare of young children is still at it, still hanging around, still bringing boys into the facility where he is apparently free to behave as he wishes. But, having technically complied with the instructions in some rule book, we allow ourselves to overlook the fact that nothing has actually happened to prevent the perceived offender from re-offending.

To speak about the Sandusky case itself for a moment, I suppose it’s possible that all these eye witnesses were wrong, that they thought they saw something very wrong but were simply misinterpreting what was nothing more than good-natured, if excruciatingly vulgar, horsing around. I’m sure Sandusky’s lawyers will attempt to depict the allegations in just that light, but that is precisely my point: it is the job of the police to investigate allegations of misconduct and then to decide if the allegations are credible or not. And it is the job of the district attorney to determine if the behavior in question constitutes a crime of which the accused can actually be indicted. And it is the job of the grand jury to weigh the evidence and then either to return an indictment against the accused or not to return one. But the original witnesses—the individuals who saw with their own eyes what they had no difficulty understanding or deciphering—cannot feel morally done with the matter once they see clearly that nothing has happened to halt the abuse. And what about the boys’ parents? Is it possible they were all completely unaware of what had befallen their sons? I suppose it is possible to look and not to see, but it still seems incredible that the boys’ doctors, their teachers in school, their parents, the parents of their friends, their friends themselves, their principals, their clergy people, their neighbors, their coaches, their guidance counselors—that no one at all noticed the pain, the fear, the emotional distress, or any sign at all that something horrible had happened. Yet no one at all spoke up for, so says the indictment, fifteen long years.

The whole question of people—and apparently lots of them—being capable of looking past one of the most heinous of crimes and doing nothing at all about it is the aspect of the case that calls out to me. I was eleven when Kitty Genovese was murdered on Austin Street, just a few blocks from my parents’ apartment house in Forest Hills. I was only in fifth grade at the time, but I can easily recall the brouhaha that followed once it became clear—or at least once it was widely believed—that dozens of people would necessarily have heard that poor woman screaming and yet chose to do nothing to help her. There has been a lot of debate over the years about what actually happened—although it appears to be basically true that she screamed for help repeatedly and no one phoned the police or came out into the street to offer her any assistance—but the whole incident somehow became emblematic of the ability of people simply not to hear what they do not wish to hear, not to see what they will only complicate their own lives by seeing, and not to feel responsible for actions that no one could credibly describe as any of their business.

For Jews, of course, this is an old story. There is a beautiful, tree-lined avenue at Yad Vashem on which each tree honors one of the righteous non-Jews who put his or her life on the line to save Jewish lives during the Shoah. It is a stirring place to visit, but, like all trees, these too cast a shadow—in this case on the vast majority of Europeans living under Nazi rule who were capable of looking on from afar as their Jewish neighbors were degraded, deprived of even their most elemental civil rights, and then eventually either murdered or deported to their deaths, yet who had it in their hearts to do nothing at all to help. In the New York Times the other day, David Brooks took his readers to task for allowing themselves smugly to assume that they would necessarily have behaved better if they were in Joe Paterno’s shoes or in assistant coach Mike McQueary’s, that they would never have had it in them to look away when children were being abused, that they would have done the right thing if they had been living on Austin Street the night Kitty Genovese was raped and murdered. Brooks’ point, well made and well argued, is that none of us can say with certainty how we would behave in such a situation, that there are people who step up and do indeed do the right thing and other people who simply do not…and that none of us can know with absolute certainty in advance to which group we will belong until the opportunity to speak out or not to speak out actually presents itself. (You can find David Brooks’ essay, called “Let’s All Feel Superior,” here. It’s a good read and I recommend it to you.)

But saying that none of us can say with certainty how we would behave in such a situation and then leaving it at that is hardly enough. As I said before, I have no way of knowing in advance what the outcome of Jerry Sandusky’s trial is going to be and I see no point in behaving as though I do. But I do think we could all profit by taking the backstory to heart and asking ourselves whether we have earned the right smugly to condemn all those who could have stepped forward over a decade and a half and yet who found it in their hearts to remain silent and to do nothing…and, since we’re asking unsettling, stress-inducing questions, also by asking ourselves how sure we are that we would have earned the right to be honored with a tree at Yad Vashem when exerting ourselves to save a Jewish child would have put our own children’s lives at risk. Those questions too, of course, have no answers. But asking them of ourselves can itself be a salutary exercise: to grow morally throughout the years of our lives, we need consistently and repeatedly to look out at the world and, instead of taking smug satisfaction in condemning those who appear to have behaved disgracefully, ask ourselves if we truly know our own mettle…and then, once we admit (as we all must) that we do not, ask what we are doing constantly to grow spiritually and ethically so as to guarantee that no one will ever say of any of us that we had the opportunity to do good in the world but simply looked away.

Thursday, November 10, 2011

Throw-Away Children

Earlier this week, the Supreme Court agreed to hear two different cases, both of which are predicated on the argument that sentencing minors—in both of the cases at hand, young teenagers—to lifetimes in prison without the possibility of parole constitutes precisely the kind of cruel and unusual punishment prohibited by the Eighth Amendment to the Constitution.

The background to the decision to hear these cases is instructive and has mostly to do with the 2010 Supreme Court ruling in the case known as Graham vs. Florida, in which the justices concluded that the clause prohibiting cruel and unusual punishment “does not permit a juvenile offender to be sentenced to life in prison without parole for a non-homicide crime.” In other words, the court held that sentencing a juvenile to die in prison—which is the more stark way of saying that someone is sentenced to life without the possibility of parole—is unconstitutional if the crime of which the young person in question was convicted was not murder. The number of juveniles convicted of the kind of non-homicide crimes that resulted in that kind of sentence—rape, armed robbery, and kidnapping—is about 130. Those, however, are a small minority of the prisoners in our country who were convicted as minors and sentenced to death in prison: if you include those found guilty of some form of homicide, the number rises to over 2,000 prisoners. Moving incrementally forward, the Court has now agreed to hear cases deemed representative of the roughly seventy people (out of that 2,000 figure) who were under the age of fourteen when they committed the crime that led to their being convicted of some version of homicide and then given sentences of life-long incarceration. And the Court has, I assume intentionally, chosen to begin with two cases that involve minors who themselves did not actually kill, but whose deeds were deemed participatory in murder. (I should make it clear that the Court’s earlier ruling does not imply that any young person currently serving a life sentence without the possibility of parole is necessarily going to be retried or resentenced, let alone have his or her conviction overturned.
All it means is that the young people in this category must at some point be given some chance to show that they have matured enough while in prison to warrant being permitted “to rejoin society” rather than staying in prison for the rest of their lives. What is on the table now is the question of whether the homicide exclusion is constitutional.)

In one of the cases the Court has agreed to hear, Jackson vs. Hobbs, the defendant, a fourteen-year-old named Kuntrell Jackson, participated in the robbery of a video store in Arkansas in 1999 during the course of which one of the other robbers shot and killed the clerk working in the store. In the other case, one called Miller vs. Alabama, a fourteen-year-old named Evan Miller and an older friend, both of them drunk and high on marijuana, administered a beating to a neighbor in 2003. They then set his house on fire, as a result of which the neighbor died of smoke inhalation. These are horrific crimes that resulted in the death of innocents. Surely, society cannot look the other way when citizens are murdered in the places of employment or in their homes merely because the persons responsible for their deaths did not specifically set out that day to kill anyone. I can’t imagine anyone disagreeing with that thought, and yet I find myself strangely unsure about how I feel about the actual matter before the Court.

Just to provide a bit more background, the Supreme Court has already ruled that the death penalty may not be imposed on minors. (The original decision, making it illegal to sentence juveniles under sixteen years of age to death, dates back to 1988. Then, in 2005, the Court determined that no one under eighteen may receive the death penalty.) But it is specifically not the death penalty that is under discussion here.

On the one hand, fourteen-year-olds are children. Our bar- and bat-mitzvah boys and girls are only slightly younger. Many need to stand on the box we keep on the bimah when they deliver their remarks so they can see over the lectern. They aren’t even in high school yet, and so it seems impossible to imagine them or other young people their age already beyond redemption, already so far beyond the pale of normal and rational behavior that the only reasonable way for society to deal with them is to lock them up forever and then to toss away the key. Would I feel differently if it was my brother who died in that house fire, or if it was my dad working in that video store? I’m sure I would, but that, of course, is precisely why we do not invite the brothers or sons of homicide victims to serve on the juries considering the fates of their relatives’ alleged murderers.

Where should Jewish people stand on an issue like this? On the one hand, our tradition is strongly in favor of using the justice system to make society safe. (And people—of any age—who participate in armed robbery or who set other people’s homes on fire are clearly dangerous and violent people who make society the precise opposite of safe.) On the other hand, our tradition is adamant that the gates of teshuvah, of repentance, are always open…and that there is no one at all who cannot turn around, who cannot renounce sin (and crime), and, through the sheer force of will and the power of faith, become a God-fearing person from whom society has nothing at all to fear. Of course, there is no way to know in advance who will choose that path and there are no guarantees, nor could there ever be, that even defendants who appear the most docile and chastened in the course of their trials will actually make the decision to abandon their evil ways and embrace goodness. If guarantees like that were possible, then that would make the administration of justice a whole lot simpler! Seen in that light, the question really before the court is not whether violent criminals are bad people, but whether we are prepared, as a society, to say that there simply are children—and fourteen-year-olds are children no matter how horrifically poorly they behave—of whom it can reasonably be said that there is no hope for them to grow past the badness of their earlier years and end up as law-abiding citizens from whom society needs to fear nothing at all. Isn’t that what we are saying to a fourteen-year-old to whom we deny even the possibility of parole: that there is no possibility of you ever growing past the out-of-controlledness of your youth and therefore no possibility of society ever being able to stop incarcerating you? And, if that really is the case, then why shouldn’t we just say so out loud and save you from a lifetime of hoping in vain for mercy that will never come?

Our Torah, towards the end of Deuteronomy, considers the case of the rebellious son. In my own translation, the passage reads as follows: “If someone should have a violent and rebellious son who does not listen to his father’s instructions or to his mother’s, then, assuming they have attempted to discipline him and he still refuses to obey, his father and mother should seize him and bring him…to the gates of their hometown. And there shall they say to the elders of his city, ‘This son of ours is violent and rebellious, he does not listen to our instructions, and he is a glutton and a sot.’ All the inhabitants of his city shall then stone him with stones until he dies, and thus shall you eradicate evil from your midst so that all Israel hear and become chastened” (Deuteronomy 21:18-21). That sounds gruesome enough and, indeed, all those people who can’t find enough reason to hate religion and mock its teachings naturally can’t get enough of this passage. But for us, the question isn’t really what the Torah says, but what it means. And that requires considering the oral traditions that go along with a passage like this, traditions generally ignored entirely by people eager for any pretext to heap abuse on the Bible.

The most convenient place to find the laws relating to the rebellious son catalogued is in Maimonides’ Mishneh Torah. (The more precise place to find them in that encyclopedic work is in the seventh and final chapter of the section called Hilkhot Mamrim, the Laws Pertaining to the Rebellious.) And there we find the beginning of our answer regarding the proper Jewish response to the matter before the Supreme Court. For one thing, the laws pertaining to this rebellious son are so restrictive that it’s easy to understand where Rabbi Simeon was coming from when he declared that this law was never actually used to convict anyone at all and that it appears in Scripture merely to teach a profound lesson about the importance of obedience towards one’s parents. (His opinion is found in Tractate Sanhedrin on page 71a.) The son in question, for example, cannot be a minor—he must already have reached the age of commandments—but he also can’t be a full-fledged adult (because then why would the Torah reference him as a “son”?). So he must be thirteen, but not yet fully physically mature—and no boy exists in that state, according to Rambam, for more than three months. On top of that, tradition declares that he must actually have stolen money from his parents and used the stolen funds to purchase meat and wine. (Not that many thirteen-year-olds have a taste for wine, so that law represents a serious narrowing of the law’s applicability. But the Torah specifically requires that his parents condemn him as a glutton and a drunkard!) If such a precocious lad is located, then he must have eaten the meat and drunk the wine outside of his father’s house, but not by himself either. Instead, he has to have consumed the goods purchased with stolen funds in the company of hooligans and ruffians. The meat itself must be eaten raw, but not entirely raw—thus slightly cooked—and the wine must have been thinned with water before being drunk. If the meat wasn’t kosher, the law doesn’t apply.
If he consumed the forbidden feast on a fast day, the law doesn’t apply. If he consumed the food at a feast connected somehow with the performance of the commandments, for example at the feast following a bris or at a wedding, the law doesn’t apply. If the value of the food is less than fifty dinarim, or if the meat alone was not worth that much, the law does not apply. If his parents forgive him, the law does not apply. Nor does the law apply in any way to daughters.

You get the picture. No wonder Rabbi Simeon wondered how anyone could ever be convicted! To be fair, a different rabbi, Rabbi Jonathan, is cited in that same passage as mentioning that he himself once visited the grave of such a boy. But, realistically speaking, how many such children could ever have been executed? My guess is none at all. Or the one whose grave Rabbi Jonathan came across and no more. The bottom line is that, despite the unimpeachable sanctity and authority of Scripture, the rabbis could not imagine that the simple meaning of the text—that inveterately rebellious children should be given up on and suitably and permanently punished—could be the whole story. They accepted the text as sacred. But, by focusing the law through the prism of their own moral consciousness, they found the courage to take the text as hyperbole intended solely to warn the faithful against taking a cavalier attitude towards the respect and obedience we owe our parents. In other words, they simply could not imagine that a boy of thirteen and a few months could possibly be so irretrievably bad that the only rational response to his poor behavior would be to end it violently and permanently by taking his life. And these were hardly people, our ancient sages, who treated the words of the Torah lightly!

I think the Supreme Court is on the right track. So were our ancient sages. Executing children is barbaric and wrong. But throwing them into prison and supposing that nothing—no amount of counseling, no amount of maturing, no amount of teaching, no amount of exposure to positive, moral role models, nothing at all—could ever help a troubled, violent boy or girl turn into the kind of adult who could live constructively and peacefully in the world, that seems to me to be as wrong an idea as anyone ever had. Would I feel differently if it were my dad who got shot in that video store? I’m sure I would. But I would be wrong. There are no children worth throwing away.

Thursday, November 3, 2011

Mormons and Jews


A Public Religion Research Institute poll released last Friday indicated that fewer than half of registered voters could identify Mitt Romney’s religion correctly. The number of Americans overall who could say that Romney is a Mormon was even lower. (The numbers were 49% and 42%, respectively.) That would seem to suggest that, should he get the nod, the governor’s religion will not be a major factor in the race. Nonetheless, I do not believe that will be the case.

That, of course, isn’t to say that most of us won’t wish it to be so. My sense is that an overwhelming majority of Jewish Americans would easily support the notion that a candidate’s personal religious beliefs should be as irrelevant as skin color or ethnic origin when it comes to deciding which candidate is the most worthy. That, however, is clearly not the view of a significant portion of non-Jewish Americans. A Gallup poll from a few weeks ago came up with the result that a full 20% of Republicans would not vote for a Mormon candidate no matter how otherwise qualified he or she might be. The Reverend Robert Jeffress, an evangelical pastor who leads a gigantic mega-church in Dallas, made a huge stir last month when he declared that in his opinion Mormonism was not even a real religion, just some sort of cult, and that its members were kidding themselves if they thought of themselves as Christians. Rick Perry, Romney’s chief contender and the candidate the reverend is backing, politely distanced himself from that statement. But there’s no question that lots of evangelicals agree with the pastor. The Southern Baptist Convention, the largest Protestant church body in the United States and the second-largest Christian organization in the United States (only the Catholic Church is bigger), has officially labeled Mormonism a cult as well. And the reverend was probably quite right when he responded to his critics, and they were legion, by referencing the Southern Baptists’ stand and adding that, in his opinion, “there are a lot of people who will not publicly say that's an issue because they don't want to appear to be bigoted, but for a lot of evangelical Christians this is a huge issue, even if it's unspoken.” For better or worse, I think he’s probably got that exactly right. (If you’re not sure how far some anti-Mormons are prepared to take this, take a look at www.exposemittromney.com and you’ll see what I mean.)

If we exclude information gleaned from watching The Book of Mormon or Angels in America on Broadway, most Jewish Americans know almost nothing about Mormonism. And most Jewish Americans don’t live anywhere near Broadway anyway! What many of us have heard about, and find beyond perverse, is the Mormons’ creepy custom of posthumously baptizing Shoah victims, thus making them—in their own minds only, of course—into ex-post-facto members of the Mormon faith. Correctly called “vicarious baptism” or “proxy baptism,” the practice—condemned by every other major Christian denomination, including the Catholic Church—involves baptizing a living person on behalf of a deceased individual and the Mormons have been doing just that since 1840. The practice, however, is not limited to the martyrs of the Holocaust. Other prominent Jews—including Maimonides, Irving Berlin, and Albert Einstein—have apparently also been baptized posthumously. Nor is the practice limited to Jews: just last year it was revealed that, of all people, President Obama’s late mother was posthumously baptized by Mormons acting wholly on their own. She thus joins Heinrich Himmler, George Washington, and Christopher Columbus in the club of unwitting and unwilling after-the-fact Mormons. The whole thing is so patently ridiculous that it is hard to know whether the more rational response should be anger or incredulity. To their credit (and in response in no small part to the Jewish community’s outrage), the Mormons claim to have stopped the practice in 1995. And just last September they agreed to remove the names of all previously posthumously baptized Jewish Shoah victims from their rolls.

The opinion Jewish Americans have of Mitt Romney should, of course, be a function of his record in the world of business and as governor of Massachusetts from 2003 to 2007. I wish today, however, not specifically to write about Mitt Romney at all, but about another Mormon, one who seems to have been long forgotten by everybody but who showed uncommon insight and bravery in standing up for Jewish interests when the rest of the world noted the devastation wrought by the Nazis on the Jews of Germany on Kristallnacht, which occurred seventy-three years ago next Wednesday, and then quickly looked away. The man’s name was William Henry King, and he represented Utah in the United States Senate from 1917 to 1941. He was also president pro tempore of the Senate in 1939-1941, which put him third in line to succeed to the presidency should the president have become incapacitated. (Amazingly enough, he was not the only William King ever to serve as president pro tempore. William R. King, our country’s shortest serving Vice President, was president pro tempore of the Senate from 1836 to 1841 and then again from 1850 to 1852.)

By all accounts, Kristallnacht was the end of the beginning of the Shoah, the event that, at least in retrospect, serves as the watershed moment after which nothing was ever again the same for the Jews of Germany. In the course of one evening of terror, over 1600 synagogues were ransacked. Hundreds more were burnt to the ground. Countless Jewish businesses and shops were destroyed. Ninety-one people were murdered in the course of one single evening and over 30,000 Jewish men were arrested and carted off to concentration camps. (Of them, more than two thousand died as a result of the brutal treatment to which they were subjected and the rest were forced as a condition of their release to agree to leave Germany.) For those of us looking back on the horror after all these years, it seems impossible to imagine the world not finally awakening to the demonism that had seized Germany and responding dramatically and forcefully.

Mostly, the world yawned. No economic sanctions were put into place against Nazi Germany. America recalled its ambassador to Germany briefly as a kind of formal protest against the ferocity of the pogrom, but diplomatic relations were not severed. Nor were immigration quotas in the free world relaxed to permit the Jews of Germany to escape to freedom. Even something as innocuous and deeply humanitarian as the Wagner-Rogers bill, which would have allowed 20,000 Jewish children to come to the United States outside the quota system, was opposed by FDR and eventually died in committee. As chronicled just this week by Rafael Medoff in an op-ed piece published on the website of the Jewish Telegraphic Agency, Christian America was equally unmoved. (You can find Medoff’s very interesting essay here.) And then there was William H. King. Arguably the most powerful Mormon in the United States at the time, King chastised FDR for recalling our ambassador “for consultations,” correctly understanding that the Germans would understand the move as little more than a slap on the wrist. And then, when President Roosevelt—as unmoved by the events of Kristallnacht as he was apparently uninterested in risking his own political capital by moving aggressively even to rescue children from the Nazis—coldly noted in public that a revision of America’s immigration quota system was “not in contemplation,” King responded by suggesting that Alaska be opened up entirely as a haven for Jewish refugees.

Why don’t I know about that? I read Michael Chabon’s novel, The Yiddish Policemen’s Union, a few years ago and liked it less than I had hoped I was going to, yet I somehow missed the fact that it was rooted in history rather than solely in the author’s imagination. It turns out that there was indeed such a proposal. Called the Slattery Report, it was named for Undersecretary of the Interior Harry A. Slattery but produced at the behest of Secretary of the Interior Harold Ickes. And it specifically proposed that Jewish refugees from Germany and Austria, then the extent of the Reich, be permitted to settle in four specific locations in Alaska. (The concept was that the quota program could be legally sidestepped in this specific way because Alaska was a territory of the United States, not a state.) In retrospect, it seems like a zany sort of response to Kristallnacht, but perhaps that is only how it seems this long after the fact. At the time, it had the support of an interesting range of religious organizations, including the Labor Zionists of America, the American Friends Service Committee, and the Federal Council of Churches. It could surely have made a difference in the fate of countless European Jews. And it was none other than William King who introduced the bill into the Senate. (Representative Frank Havenner, a Democrat from California, introduced the bill into the House of Representatives.) But without Roosevelt’s support, this bill too was buried in committee and never again saw the light of day.

Still, as people line up to decide whether Mitt Romney’s religion should affect voters’ decisions about supporting his bid for the presidency, it might be worth considering that when the Jewish people was facing the darkest of hours in its history, a concrete, dramatic, and eminently doable plan to save countless European Jews was introduced into the Senate by a Mormon. When America’s leaders could not bring themselves to act decisively even to save children, King took the Slattery Report and put his considerable authority behind it, proposing it be enacted into law. It came to nothing at all. Obviously, it hardly makes sense to support or not to support Mitt Romney because of something William King did more than seventy years ago. But when I think of Mormonism in general—and I haven’t seen the musical, although Joan and Max, my oldest, did—I find myself able to look past the nuttiness of posthumously baptizing Anne Frank—a practice I believe even the Mormons themselves must now regret—and remember instead the fine and noble example set for us all by William Henry King, a brave man for whom the notion that it was “not in contemplation” to act decisively to save the Jews of Europe was reason enough to act outside the boundaries of political loyalty and to go up against his president to do the right thing.

Friday, October 28, 2011

Advice for the 99%


Like many of you, I suspect, I’ve only been vaguely aware of the Occupy Wall Street people camped out in Zuccotti Park for the last six weeks or so. I don’t even think I knew where Zuccotti Park was until just recently. (It’s in Lower Manhattan between Broadway, Trinity Place, and Liberty and Cedar Streets. It was called Liberty Plaza Park until 2006, but I don’t believe I knew where that was either.) Maybe some of you know all about it, but I certainly cannot agree with Douglas Rushkoff, the CNN columnist who wrote the other week that, at least in his opinion, “anyone who says he has no idea what these folks are protesting is not being truthful.” Yet, if there’s one thing Jewish history has taught me, it’s never to shrug off large numbers of unhappy people demonstrating in the street as irrelevant or unimportant merely because they don’t overtly appear to have anything to do with me personally. And so I set myself to attempting to figure out what this is all about. (And the numbers are not inconsequential. A few weeks ago there were, by police estimates, 15,000 people in the park. And they were joined by tens of thousands of others demonstrating across the globe in places as diverse as Auckland, Sydney, Hong Kong, Taipei, Tokyo, São Paulo, Paris, Madrid, Hamburg, Berlin, Leipzig, Frankfurt, Phoenix, Chicago, and Minneapolis. Were we to understand that these were all spontaneous demonstrations of support for the people in Zuccotti Park? That was how the media depicted the demonstrations…but that too caught my attention. What, I asked myself, could possibly have inspired all these people simultaneously to mount the barricades? Or is that just the kind of question someone would ask who neither has a Twitter account nor understands why anyone would want to have one? Maybe that is just how the world works now! But can someone really tweet and get tens of thousands across the world to respond? Apparently!)

As far as I can tell, the single theme that unifies all the protesters has to do with the unequal distribution of wealth in our country (and, for that matter, in every other country). The slogan “We are the 99%” references the claim, endlessly repeated in the media, that the wealthiest 1% of Americans earn 24% of our nation’s income. (I keep hearing the statistic quoted as 40%, but I believe that not to be correct.) It’s hard to know what to do with that number, however. By any measure, 24% of the earned income of Americans in any given year is a huge amount of money. But it’s not that dramatically different from how things have been historically in our country: in 1915, the year of my mother’s birth, the richest 1% of Americans earned 18% of the nation’s income. Still, it’s easy to amass statistics and difficult intelligently to analyze them. In 2010, for example, the wealthiest 20% of our nation’s citizens earned 49.4% of the nation’s income, while the poorest 15% of Americans earned 3.4%. (Those 15% of Americans are, not coincidentally, those who live beneath the poverty line.) Is that more or less significant than the 1% earning 24%? And yet another way to view the disparity between wealthy and poor is to observe that the mean after-tax income of the nation’s wealthiest 1% rose 176% between 1979 and 2005, compared to a growth over those same years of just 6% for the poorest 20% of Americans. Is that a sign of the degeneracy of American culture? Or is it impressive that even the poorest are better off now than a quarter-century ago?

I am not an economist, and I therefore struggle to find these statistics as arresting as the demonstrators clearly expect me to. Do the demonstrators have a point that things have changed in our country, and for the dramatically worse? Or is that just how things have always been, that rich people with plenty of money to invest have the means to make even more money while the poor—who spend their income on groceries and rent—do not? In a sense, the real question behind all of this is whether the disparity between rich and poor is itself a societal evil we should be working to eradicate or whether it is the specific width of the chasm that separates the wealthiest from the least wealthy Americans that we should be finding outrageous (i.e., and not the simple fact that such a chasm exists at all). Among Christians, Jesus’ comment that there will always be poor people in the world (found at Matthew 26:11, where Jesus justifies someone wasting a bottle of expensive perfume to anoint his head even though the perfume could have been sold and the money given to the poor by observing that there will always be poor people no matter how much charity anyone gives away), has been used over and over—especially just lately, it seems to me—to justify the right of the wealthy to live lives of luxury when others have almost nothing. I suspect that’s a misreading of Christian tradition, but what do I know? It’s hardly my place to critique other people’s gospels…but instead I’d like to challenge myself here to say what our Torah actually does teach about the fact of income disparity. It’s not as simple a question as it sounds as though it should be!

On the one hand, our Torah seems clearly to suppose that poor people will permanently exist within Israelite society. Many laws, in fact, seem naturally to presuppose such a situation. A law in Exodus regarding the sabbatical year, for example, specifically discusses the land that must be left fallow every seventh year and, addressing the Israelites, says that they must “let it rest and lie still so that the poor of your people may eat…and so shall you deal also with your vineyard, and with your olive trees.” Another law, also in Exodus, cautions judges against favoring poor people when they appear in court as litigants going up against wealthy adversaries, the clear point being that the justice system derives its authority at least in part from its supreme impartiality and that this quality cannot be compromised merely because one litigant is less well-off than the other. Still a third law, this one from Deuteronomy, cautions against taking the tools of a poor person’s trade as collateral when lending him or her a sum of money. There the logic is obvious and impeccable: if you take away the tools of a debtor’s trade, how can that person be expected ever to earn the money to pay back the loan? And in a similar vein is the law, also in Deuteronomy, requiring employers to pay their poorest employees on a daily basis no matter how inconvenient that might be: if someone needs his or her daily wages to feed a family or to purchase basic necessities, then the inconvenience of the employer is not taken into account and the worker’s wages must be paid out daily.

All of the above-mentioned laws seem easily to suppose that the existence of poor people is a basic feature of society, and a permanent one. But there is one extended passage in the Torah that attempts to depict the process whereby people slip into poverty. The portrait is a moving one, beginning with someone short of funds who sells off some of his family’s land to raise some cash. He retains the right to redeem the land—that is, to buy it back—but even if he cannot afford to do so the land reverts to his possession in the jubilee year. That doesn’t sound so bad—he gets the land back and keeps the purchase price! (On the other hand, there are jubilee years only twice a century. So it’s not as good a deal in the third year of the cycle as it would be, say, in the forty-seventh.) But then the Torah moves forward and imagines a man with no more land to sell. This person has to borrow money, and then must pay it back. He does not have to pay interest—it is forbidden for one Israelite to charge interest when lending money to another—but neither is he working for himself any longer: whatever he earns must go not to raising his own standard of living but to paying back his debt. And then we imagine a third stage in the descent into poverty, the one in which a person can no longer borrow money in the normal way. Perhaps he lacks anything to use as collateral. Perhaps his debt load is already too high for anyone to risk lending him more money. Or perhaps he is deemed unlikely to pay back whatever money he borrows for some other reason. Once borrowing in the regular way is no longer an option, there is always another way, however, one even less desirable than going into debt, to raise funds: this man sells himself into indentured servitude, agreeing to give up his freedom and become the unpaid employee of another Israelite in order to raise the money to pay back his previous debts.
And finally, when there simply is no other way, the Torah imagines the worst of all fates for a faithful Israelite: someone brought so low by circumstances as to have to sell himself to a non-Jew who, not being bound by the laws of the Torah, will have no scruples about treating him not as an impoverished brother or a landsman, but as a slave.

The passage is very moving, but I’d rather focus on its implications than its detail: here is the story of a man sliding into poverty, not born into it and certainly not condemned to it by mere circumstance. The assumption seems to be that, yes, there will always be poor people but that they do not function as a permanent caste within society that will always exist regardless of who its members might be in any given generation. Instead, the implication is that there will always be people who make poor investments, whose businesses close, whose crops fail, or who borrow unwisely. When poverty overtakes an individual, then, the Torah ordains practical ways for members of the House of Israel to relate to such a person not by regretting his or her misfortune but by lending that person money without interest, by buying land even though it will revert to its original owner in the jubilee year, by taking someone in truly dire straits into one’s home as an unpaid employee (and thus providing such a person with a way out of debt that requires neither collateral nor property up front). In other words, the Torah views poverty as misfortune not as destiny, and ordains that the faithful strive to find ways to help not “the poor” as a class within society, but each individual who falls on hard times. And to do so kindly and respectfully as well, thereby transforming tzedakah from welfare into worship.

So where does that leave the people in the park? Since they are, as far as I can see, mostly referencing themselves as the have-nots in the equation they are putting forth (“We are the 99%”), their unhappiness has a certain self-serving ring to it that so far has failed to engage me. In an open society such as our own in which people with almost nothing can and do achieve great professional and financial success, it seems odd to argue that the chasm is unbridgeable. That only a fraction of the needy will manage to become truly wealthy is not really the point. What I think is probably the more reasonable attitude towards the disparity between rich and poor is not to whine or complain about it, but for each member of society—and certainly within our Jewish world this should be the norm, not the exception—for each individual Jewish citizen to feel personally responsible for those less well off than he or she is, then to offer tzedakah within his or her means not merely by giving a few dollars to someone in need, but by creating opportunities for the needy to lift themselves out of poverty and to acquire marketable skills that will lead to gainful employment. To spend weeks camped out in a park chanting about how inherently unfair it is that society spans the gamut from poor to rich seems just a bit pointless to me. Better these people should spend a few weeks doing whatever it might take to reach out to someone even less well off than they themselves are and help that person achieve a level of prosperity in life that that person might otherwise never attain. That, it strikes me, would be a practical response to poverty in America…and in every other country as well.

Friday, October 7, 2011

Travels in Time


Our Shelter Rock computer guy, Ron Kliot, put an interesting question to me the other day, one that’s been with me ever since. How, he asked, would I respond if it were to be announced tomorrow that a time machine has been invented, patented and manufactured, and that I personally have been selected to test it out. There are, however, some serious flaws in the machine’s operating system: time travelers will only be able to travel back in time twenty years exactly, the duration of the traveler’s visit to the past will be exactly five minutes, and the only person in the past who will be able to see or hear the time traveler during his or her visit from the future will be the twenty-year-younger version of him or herself. (Anticipating my next question, Ron also revealed one other flaw in the system: that the younger version of the traveler, the person being visited in earlier time, will somehow be able to remember whatever the later version of him or herself has to say, but not the experience itself. That’s why, supposing this were to be true, none of us remembers meeting three-year-older versions of our current selves seventeen years ago.) And so, cast in slightly weird, science-fiction-y terms was the challenge Ron laid down just before Yom Kippur for me to accept or not to accept as I saw fit: if I actually could travel back to 1991 and visit the thirty-eight-year-old me, and if I had exactly five minutes to say whatever I would like for that version of myself to know without having to find it out for himself the hard way over the ensuing twenty years and if the younger me would be able to recall, if not the experience itself, then at least the message I will have traveled back in time to transmit…so what exactly would I say to the younger me?

It’s a good question. My first attempts at answers—“Buy Apple!”, followed shortly by “Buy Google!”—were unimpressive and a bit childish: if I really could talk to the twenty-year-younger me, would stock tips be the best I could do? (I heard that! But surely you don’t really think that the only thing that would make your life (or my life) better or happier now would be having made more money or acquired a more cleverly-put-together portfolio back then! Yeah, yeah…me too! But, seriously, is that really what you’d use your five minutes to say? Mind you, I don’t think insider trading laws could be made to apply to advice you give yourself!) But seriously, and clever-investment-advice jokes aside, what actually would you tell yourself? Or, since Ron put this to me, let me phrase that in a more personal way: what would I tell myself if I only could? (Just for the record, Google only went public in 2004. But Apple was trading at $47 a share on my birthday in 1991 and for $345 on my birthday this year. And it closed at thirty dollars or so higher than that yesterday, even after the untimely death of Steve Jobs. Oh well!)

My father was still alive in 1992. Would now-me have warned then-me that I had less than seven years left to ask him all the things I never actually did get around to asking about? That sounds like more worthy counsel than stock tips, but traveling that particular route opens other doors I have always preferred to leave shut. Why is it exactly that I never asked my dad about his first marriage—including not even his first wife’s name or her eventual fate—or about his parents’ apparently extremely complicated relationship? Why do I know more or less nothing at all about my father’s mother’s family? And why have I never met anyone at all from my father’s father’s family other than my father and his siblings themselves? My father graduated from high school in 1934 and married my mom in 1951. Why is it I have no clear idea where he lived or what he was up to for the intervening seventeen years? There’s a photograph somewhere in one of our albums of him in Florida…but I don’t really know why he was there or how long he lived there, or even, to speak honestly, if he lived there or was just visiting. Even typing these questions out is extremely stress-inducing for me and, as I find myself formulating them I simultaneously also know that I would never have used my five minutes in the past to go there, that if I really wanted all those doors open I would have opened them on my own during the many years I could have. I was, after all, forty-six years old when my father died! And I certainly wouldn’t have needed a boost from future-me to formulate the questions that would possibly have pried them open, just the courage I lacked then and no longer need to lack now.

Twenty years ago, we lived in Richmond, British Columbia, a suburb of Vancouver situated on half-rural, half-suburban Lulu Island in the alluvial delta of the mighty Fraser, the province’s longest river. It was a kind of a paradise, British Columbia. Growing up in Queens, I hadn’t ever seen bald eagles nesting in the wild or flying around overhead. I certainly hadn’t ever walked along a riverbank and seen seals jumping in and out of the water just a few yards off the beach or owned a dog that felt entirely free to jump into the water to join them for a quick swim. I don’t believe I even knew there were such things as snowy owls, let alone that I would be able eventually to recognize their call (something like a hoarse gawwwwh) from a distance, and to distinguish it from their alarm bark (which is more like quacking than gawwwwh-ing). You get the picture. It was beyond gorgeous. We throve there for as long as we stayed, or we thought we did. But perhaps if I could have five minutes with the 1991-iteration of myself I would suggest focusing a little less on the owls and a little more on my children…and asking myself if all that natural splendor was worth the price we paid for living as far as we did from the kind of Jewish life that they only got to know once we finally landed in New York. (California, if anything, was a step down, not a step up, in that regard. I’ll write about that whole experience some other time.) I don’t regret our time in B.C., not even a little bit. But I do think that now-me would have used at least a minute or two of the five to tell then-me not to linger too long at the fair…and to make sure we were back on the East Coast before it was too late for my kids truly to profit from the move.

So that’s possibly how I would use three of my five minutes: buy Apple (did you really think I’d skip that part?) and move east. But what about the other two minutes…what would I use them to say? Supposing there will be some magic mechanism built into the system that prevents time-travelers from revealing future winning lottery numbers or Kentucky Derby winners to their younger selves, I think I’d spend my final two minutes telling myself to stick with my fiction, to keep writing novels no matter what, to understand that, in the end, what my children will treasure most of my legacy will be the books and stories I leave behind for them to contemplate—biz hundert-tzvantzik—one day in my absence.

As you all know, I write a lot. I’m in the middle of a huge book project right now, which you will learn more about as soon as you have a chance to read our October Shelter Rock bulletin. And behind that one is my Chumash (that is to say, my personal, slightly idiosyncratic translation of and commentary on the five books of the Torah), which is coming up next once I finally get The Observant Life off my desk.

As many of you know, I do most of my writing in the morning before minyan and in the evening, if I can stay awake, after minyan, provided I don’t have anything keeping me at shul after that. I spend a lot of my Wednesdays at my desk trying to write. And Joan and I generally take our vacations in the summertime somewhere that I will be able to spend at least a few hours a day at my desk. But, for all I try to devote myself to my writing, it’s also so easy not to find the time, or not to have the energy to focus on something that requires the kind of concentration I try to bring to my writing, when what I really feel like doing is watching a rerun of Law and Order on television and going to bed. I do it…but I could do it more assiduously. And maybe that is what I’d tell my 1991-self if I could: that most of everything eventually fades away, but that not everything does. And that I should feel privileged, not burdened, by whatever ability I possess to express myself in words that could possibly end up constituting my most precious legacy. I feel that way about our congregation as well, by the way: I serve Shelter Rock in many different ways, most of which I have in common with most other rabbis. But my personal gift to our shul is my writing. I’ve been gratified over the years by the reception you have all accorded Siddur Tzur Yisrael, and its companion volume for the house of mourning, Zot Nechamati. I have been pleased as well by the way Riding the River of Peace, the book for young people that we published in memory of the late Dr. Jeffrey Siegel, was received by many of you. And just this year, I felt proud to be able to offer you the three stories we brought out for the High Holidays, “Teshuvah,” “Tefillah,” and “Tzedakah,” and I was very touched by the response they elicited from so many of you.

So I think that’s what I’d use the last two of my five minutes to do: to remind myself not to give up, and not to imagine that it’s all for naught, that reading is a lost art cultivated by almost no one at all in our digital age. And I hope I’d listen too!

Of course, since in this fantasy I’m spinning out for you one of the rules is that you get to remember the message but not the messenger, maybe I did learn these things from myself…not, obviously, in 1991, but perhaps in 1993 or 1995, twenty years before some point in the future at which time travel will finally be possible. True, that would still leave open the question of why I didn’t buy Apple even then. (On my birthday in 1995, Apple was trading at $42 a share, even less than a few years earlier!) But it would at least explain why we eventually knew we had to come back east, and why I have made the writing of stories and books the foundation stone upon which I have attempted to build my rabbinate.

And that is what I learned from our computer guy the other day when he stopped by to fix something in the office and, while waiting for the damned thing to boot up again, had a few minutes to shmooze with me before he had to get back to work and I had to rush off to choir rehearsal.