Thursday, December 13, 2018

The DNA of Shared Experience


The modern science of genetics has made it clear how parents can pass aspects of their own specific heritage along to their offspring. And, indeed, the reason no one finds it at all startling to assert that specific physical traits can be transmitted from generation to generation is precisely that we can see easily enough how a child’s hair color or eye color often matches one or both of his or her parents’. From there, it’s not that much of a leap to consider non-physical attributes (say, a predisposition to excel at athletics or as a musician) in the same vein. And we all know instances of children appearing naturally to be good at some skill at which one of their parents excels (or at which both do). But can that notion be extended to include specific experiences parents may have had as well? At least at first blush that sounds like a stretch: the notion that something can happen to me and that that experience can somehow end up encoded in my DNA, provided only that it crosses some theoretical threshold of genetic responsivity, feels hard to imagine. But learning about the science of epigenetics has altered my thinking in that regard, and altered it powerfully. What I’ve learned is what I want to write about this week. And also about its implications for my understanding of the nature of Jewishness itself.
I was prompted to start taking the possibility of the transmissibility of experience seriously by a study published in the Proceedings of the National Academy of Sciences just this fall. Written by Dora L. Costa, a professor of economics at UCLA, and by Noelle Yetter and Heather DeSommer of the National Bureau of Economic Research in Cambridge, Mass., the study focused on data from Civil War days and concluded that the sons of Union Army soldiers who suffered severe trauma in the course of their time as prisoners-of-war in Confederate prison camps were significantly more likely to die without reaching old age than the sons of Union soldiers who were not captured or incarcerated by Confederate forces. Since all the sons in the study were born after the end of the war, the study suggests that they must have—or at least could have—somehow inherited their fathers’ traumata and suffered from their aftereffects. There were even subtle gradations of experience to consider: the sons of men who were imprisoned in 1863 and 1864, when conditions for prisoners of the Confederacy were especially brutal and inhumane, seem to have been even more likely to die as young people than the sons of Union soldiers taken captive earlier on in the war. (To see the original study, click here. For an excellent analysis of the study published in The Atlantic last October that most readers will find far more accessible, click here.)

The phenomenon has been demonstrated to exist in the animal kingdom as well. A few years ago, for example, scientists were able to demonstrate that when mice who were trained to associate the smell of cherry blossoms with the pain of electric shocks were bred to produce offspring, both the next generation and the generation after that responded anxiously and fearfully to the smell of cherry blossoms in a way wholly unlike mice whose parents or grandparents hadn’t been trained to associate that scent with that level of pain. The study, written by Emory University School of Medicine professors Brian G. Dias and Kerry J. Ressler and published in the journal Nature Neuroscience, concluded that the mice had somehow inherited a response built into their parents’ history of experience. (For a very interesting account of this experiment published in the Washington Post a few years ago, click here.)
And then there came the study of the Dutch famine victims. In the winter of 1944-1945, to punish the Dutch for having attempted to assist the Allied advance into Europe by shutting down the railway links that were being used by the Germans to bring troops to the front line, the Nazis blocked food supplies from coming into Holland so severely that more than 20,000 people died of starvation by the time the war ended the following spring. (For more information, the best source is Henri A. van der Zee’s The Hunger Winter: Occupied Holland 1944-1945, published in 1998 by the University of Nebraska Press.) This would just be one more horrific story of German savagery during the war, but it led to some interesting scientific studies, one of which was published jointly by seven scholars led by Professor L.H. Lumey of Columbia University in the International Journal of Epidemiology in 2007 and which appeared conclusively to prove that the wartime experience of famine impacted not only on the poor souls who had to live through that dreadful winter in the Netherlands, but on the children born to them after the fact: as a group, the children of people who lived through the famine and survived to become parents later on experienced higher rates of obesity, diabetes, and schizophrenia than their co-citizens. They were also noticeably heavier than Dutch people born to parents who did not live through the Hunger Winter. And they died younger than other Dutch people did on the average—the study found that people born to Hunger Winter parents were still experiencing a full 10% higher rate of mortality even sixty-eight years after the famine ended. How exactly this all works, or might work, is beyond me. (Click here for the study itself and here for a far more easily understandable summary of its results published in the New York Times last winter.) 
But what seems easy to grasp is the basic principle: the trauma suffered by people living in occupied Holland while the Germans were actively trying to starve the civilian population into submission was so severe that children born to those people afterwards ended up with the experience somehow encoded in their own DNA even though they themselves did not experience the famine at all.

This accords well with an essay by Olga Khazan published last summer in The Atlantic in which the author was able very convincingly to demonstrate that victims of intense racial discrimination seem as a class to experience a process called methylation on the specific genes that are connected with bipolar disorder, asthma, and schizophrenia, and that this specific genetic change too can be passed along to subsequent generations. What methylation is exactly is also beyond me. The simplest explanation I could find was on the www.news-medical.net website (click here), but even it was far too sophisticated for a mere liberal arts major like myself to fathom. The basic principle, though, is clear enough: the experience of intense discrimination can apparently imprint itself on your DNA in a way that makes it possible for children born to you even long after the fact to have to deal with traumatic experiences they never personally had because those experiences have somehow ended up encoded in their DNA.
And that brings me to the Jewish angle in all of this. Just two years ago, Dr. Rachel Yehuda of Mount Sinai Hospital here in New York discovered evidence of this methylation process affecting the gene associated with stress not only in the DNA of Shoah survivors, but in the DNA of their descendants as well. The study, published in Biological Psychiatry in 2016 and not meant for any but specialist readers, was not universally praised, mostly because only survivors and their children were included in the study, not the survivors’ grandchildren or great-grandchildren. (For an example of a hostile response published in the U.K. in 2015, click here.) But those critics are basing themselves, as scientists surely should, on the specific way Dr. Yehuda used the empirical data that was available to her. I, on the other hand, not being encumbered by an actual background in science, find her work wholly convincing and more than easy to believe. Indeed, I have spent my whole life wondering whence this obsessive involvement with the Holocaust derives: I can reasonably say that not a single day has passed since adolescence in the course of which some thought or image related to the Shoah has not surfaced, invited or uninvited, in my consciousness. Both my parents were born in this country. Therefore, neither was personally a survivor. But both were adults when the war ended and the details regarding the camps and the mass executions became known; the trauma the Jewish community experienced over the months that it took for the true story to become known, compounded by the trauma of slowly coming to terms with the degree to which the Allied forces chose consciously not to interfere with the daily transfer of Jews to the death camps: that trauma, it now seems to me, is at the core of my own worldview, of my own sense of who I am and what the world is about.

Nor is this just about the Shoah in my mind. Ancestry.com recently updated my DNA profile and declared me genetically to be 100% Ashkenazic Jewish. (I had previously been hovering between 96% and 97%). Am I also carrying around the epigenetic markers associated with the First Crusade and the butchery and devastation the Crusaders brought to the defenseless Jews of the Rhineland? I suppose that will only seem an obscure question to readers who don’t know me personally. 

Thursday, December 6, 2018

Chanukah 2018

For most North Americans, Chanukah is a sort of “us vs. them” affair: the foe wanted to obliterate us (or, depending on who’s telling the story, our faith or our culture or our way of worship) but the Jews of that time were unexpectedly, even perhaps miraculously, able to resist the enemy’s dastardly plans and to chase the minions of the evil king back to wherever it was they came from before they could bring their despicable plan to fruition. Doesn’t that sound about right?

Like all (or at least most) oversimplifications, this one is not entirely incorrect. There really was a King Antiochus on the throne of the Seleucid Empire (the Greek-speaking kingdom, with its capital at Antioch in today’s Syria, that ruled over the Land of Israel in the second century BCE) and he really did promote the eradication of traditional Jewish norms of worship, even in as sacred a space as the Jerusalem Temple, to make them more universal and less ethnically distinct. There was every reason to expect the ragtag group of guerilla warriors who gathered around the Maccabees, men who seem to have come out of nowhere to do battle with Antiochus’s legions, to go down to defeat; yet they were successful and managed against all odds to expel the king’s armies from what was in those days, after all, a province of his own empire and, even more unimaginably, to wrest some version of autonomy from the central government and thus to install a kind of self-rule that lasted for almost a century. And if the darker part of the story is generally ignored, the part featuring large numbers of Jewish people more than eager to make Jewish ways less particularistic and more in step with the great cultural tide of the day (called Hellenism, literally “Greekishism,” because of its origins in the culture of classical Greece) and very happy to have the king’s support in their effort to reform the Jerusalem cult and make it more appealing to themselves and to outsiders looking in, that’s probably all for the best. Who wants an ambiguous yontif anyway? Much better to stick with the Hebrew School version and not to stir the pot unnecessarily! We don’t have enough to deal with as it is?
This week, therefore, I would like not to talk about the well-known part of the Chanukah story and its key players at all. (Shelter Rockers will hear me speak about that part of things in shul on Shabbat anyway.) Instead, I’d like to start the story in medias res and begin to say why Chanukah really does still matter by introducing a personality almost no readers will ever have heard of, one Judah Aristobulus.

And here he is, at least as Guillaume Rouillé, the inventor of the paperback, imagined him in sixteenth-century Lyons. But who was he really? And why do I want to start my peculiar, start-in-the-middle version of the Chanukah story with him of all people?
Everybody has heard of Judah the Maccabee, and most know that he had several brothers as well as a famous father. But what exactly happened to them all? That is the part no one knows. And that’s a shame, because the most profound part of the story is precisely its least well-known part.

Jerusalem was taken in the year 164 BCE, but the fighting continued for years and, indeed, Judah himself died in battle in 160 and was replaced as commander-in-chief of the Jewish army by his brother Jonathan, who later came to serve as High Priest as well. Jonathan was as much a politician as a general or a priest, however…and he made a fair number of enemies by attempting to transform an autonomous Judah within the larger Seleucid empire into a truly independent state by signing treaties with any number of foreign countries. He lasted for almost two decades, but was finally assassinated by someone who apparently found his politics intolerable and was succeeded by his brother Simon, the last of the original Maccabee brothers. The inner politics of the day is interesting enough, but what fascinates me in particular is the way the Maccabees, who started out wishing only to prevent the Seleucid emperor from disrupting traditional Jewish life, became more and more intoxicated with the power they saw themselves able to seize. Judah was a kind of a general. Jonathan was a general and High Priest. And Simon convened a national synod that formally recognized him as Commander-in-Chief, High Priest, and National Leader. Most important of all, he negotiated a treaty with the Roman Senate that cut the Seleucids out of the action entirely and acknowledged the Maccabees alone as the legitimate rulers of their land.
The story only gets bloodier. Simon was murdered in 134 BCE by his son-in-law, a fellow named Ptolemy, and thus became the first Maccabee to be succeeded not by a brother but by his own son, a man known to history as John Hyrcanus. In his day, the war with the Seleucids flared up again. The details are very confusing, but the basic story is simply that the Seleucids took back all of Israel except for Jerusalem itself, then abandoned it all when Antiochus VII died in 129. Indeed, as the Seleucid empire slowly fell apart, John Hyrcanus embarked on a military campaign to seize what he could of the adjacent world. And he was successful too, conquering a dizzying number of neighboring states; in the course of at least one campaign, the one against the Idumeans (the latter-day Edomites), he forced an entire nation to convert to Judaism. Most important of all, he cemented the nation’s relationship with Rome, agreeing to work only in the best interests of the Roman Republic in exchange for its agreement to recognize Judah as a fully independent state. He established relations with Egypt and Athens too, thus making Judah into a real player on the international scene. And then he died in 104 BCE, one of the very few Maccabees to die of natural causes.

His eldest son was Judah Aristobulus. The original plan was for Judah to become high priest and for his mother to become the political leader of the nation. Judah Aristobulus (also sometimes called Aristobulus I) found that irritating, however, so he imprisoned his mother and allowed her to starve to death in jail. Then, for good measure, he also imprisoned all his own siblings but one. (He had that one killed eventually too.) And it was this Judah Aristobulus who, not content with just being High Priest, commander-in-chief, and political leader, also named himself king.
It didn’t last. He himself didn’t last: he was sickly to start with and then, after one single year on the throne of Israel, he too died and was replaced by his eldest surviving brother, known to the Jews as King Yannai and to the rest of the world as Alexander Jannaeus.

It’s easy to get confused by the details. I’ve read the part of Josephus’s Antiquities of the Jews that covers the Maccabean years—the only sustained, detailed narrative covering the entire period—a dozen times. It couldn’t be easier to get lost in the forest amidst so many different trees—and the fact that there are so many different people with the same names only makes it more confusing. But when you step back and look at the larger picture, you see something remarkable…and deeply relevant to our modern world.
The Maccabees—known to history more regularly as the Hasmoneans—started out as highly and finely motivated as possible. They had an emperor ruling over them who held their national culture in disdain, so the Maccabees rose up and somehow won a measure of autonomy for their people that most definitely included the right to run their own cult and to pursue their own spiritual agenda. But the power they won on the battlefield corrupted them from within, leading them not only not to act in the nation’s best interests but to cross a truly sacred line when Judah Aristobulus finally broke with the very religious tradition his family came to prominence to protect by declaring himself king.

He wasn’t from the tribe of Judah. (The Maccabees were priests, thus of the tribe of Levi.) He wasn’t descended from David. He had no legitimate or even illegitimate claim to the throne. But he took it anyway…and that act of self-aggrandizing sacrilege set the stage within just a few short decades for a massively bloody civil war undertaken by two of his nephews who were vying for the crown, a disaster that opened the door to the Romans, who saw in it an opportunity to occupy Judah and make it part of their empire, which they did in 63 BCE. The next time Jews managed to declare an independent Jewish state in the Land of Israel was in 1948 CE, a cool 2,010 years later.
It is never a good thing when a nation’s leaders see in public service not a way to contribute to the welfare of the nation but an avenue for self-aggrandizement, self-enrichment, and self-promotion. The Maccabean descendants became wealthy and powerful. They hobnobbed with the delegates of the world’s most important nations, including the era’s sole superpower, the Roman Republic. They reduced even something as innately sacred as the office of High Priest to a mere stepping stone capable of leading to still greater authority. As they became more and more entangled in their own inner-familial struggles, they relied increasingly on generals who themselves had a wide variety of personal agendas to pursue. And then they crossed the line and, in an act of spiritual madness, made themselves the kings of Israel despite the fact that they had no justifiable claim to the crown.

Public service is a burden and a privilege. Our greatest political leaders have always been people who saw that clearly and who allowed themselves to be saddled with the millstone of public office out of a sense of personal honor and deep patriotism. We have had American leaders like that (Abraham Lincoln, I believe, was such a man) and our nation is the richer and better for their service. But the larger story of Chanukah, the one we never tell in Hebrew School, has its own deeply monitory lesson to teach: that greatness in governing is a function always of personal character…and never one of mere opportunity.

Friday, November 30, 2018

Rock of Ages


Any of my readers who hear me preach at Shelter Rock regularly know that one of the themes I return to over and over is the mystery of things, of stuff. Three hundred years ago, in 1718, the world was a different place in many ways. New Orleans had just been founded in what was then called New France. Spain, a naval superpower, was at war simultaneously with the Holy Roman Empire, the U.K., France, and Holland. The potato had just been introduced to North America, more specifically to New England, where it not only flourished but quickly became a staple of the North American diet. Coffee beans were new too, brought in that year for the first time to the New World (more specifically, to Surinam in South America). So, at least at year’s beginning: no French roast, no French fries, and no French Quarter! But the world was full of people—more than 600 million of them by most estimates. (Some were French, obviously.) And all of them had stuff.
Jewish people also had stuff in 1718, and a lot of it. There were something like 7 million Jews in the world as the eighteenth century dawned. All the married couples had k’tubbot. All the wives had wedding bands. All the men—or surely lots of them—owned t’fillin. Each synagogue had an aron kodesh filled with Torah scrolls. Every community owned a m’gillah from which it read on Purim. In each home, or surely in most, was a seder plate. Each community—and with no exceptions at all—kept a record of births and deaths, a list of who was buried in which grave in the cemetery they maintained, a ledger of contributions solicited, received, and acknowledged, and a record of circumcisions and marriages.

Some of this stuff survived. There are, for example, many Jewish communal record books in the manuscript and rare book libraries of the world. And some of it simply was not built to last and, in fact, did not survive into the modern day. But what about the rest of everything? That’s the question that continues to fascinate. It’s unimaginable—and truly so—that anyone would throw out his or her parents’ k’tubbah after their deaths.  But even less conceivable is the image of anyone not wishing to keep a beloved mother’s wedding band or the veil she wore at her wedding or her own mother’s ring or veil. But if that is the case—which, speaking realistically, it surely must be—then where is all that stuff?
I’m a good example. I know where my mother’s wedding band is. (It is on the ring finger of Joan’s left hand.) But where is my parents’ k’tubbah? And where are their own parents’ k’tubbot? And where are my grandmothers’ wedding bands? I had two grandmothers when I was born, one of whom, my father’s mother, died when I was only four years old. But my other grandma, my mother’s mother, I remember well…and I remember her wearing a wedding band, a plain gold band that she wore for decades after my grandfather died. (She gave me my first piano lessons, and I can still see her hands at the keyboard.) Was she buried wearing it? I suppose she might have been, although generally speaking that isn’t our custom. But if she’s not wearing it, then who is?  I can’t see my mother or her sister, my grandmother’s only two children, selling their mother’s wedding band for a few dollars! Nor did either of them wear it, not my aunt and definitely not my mother. So where exactly is it?  That’s the question that occupies me.

Einstein famously once wrote that the distinction between the past, present, and future is “just a stubbornly persistent illusion.” I’m sure I don’t fully understand what he meant—the difference between the past and the future feels pretty non-illusory to me—but whatever of that thought I can process is related to these other ideas I’ve been writing about. The present feels so real, so permanent, so solid. Is that sturdiness an illusion that dissolves easily in the flow of moments so that all the things of this world are simply present in the here and now but specifically not guaranteed by their existence in that mode to survive into the future? I wrote before about 1718, but I could also have written about 1018, a full thousand years ago. The Vikings were in their fullest flower back then, raiding Scotland and the north German coast with relative impunity. But there were Jewish communities even then all across Europe, North Africa, and the Middle East. And the wives in those communities also had wedding bands and k’tubbot. You see where I’m going with this. Did Prince have it right back in the ’80s when he sang that “life is just a party…and parties weren’t meant to last”? I wasn’t much of a fan, but maybe I should have been!
But some things do manage to last. And when they do, they become symbols, at least in my mind, of an important principle: the apparently terminal ephemerality of things need not point to the conclusion that history itself is mere midrash. The fact that the things of the past seem destined to vanish does not make of history something that exists only in the recollective consciousness of humankind rather than as part of real, if absent, reality.

A tiny stone has surfaced that speaks directly to this set of issues…and also directly to me personally. 

It’s just a tiny thing, a pebble really. But because it was found in Jerusalem by archeologists working under the supervision of the Israel Antiquities Authority, in dirt taken in 2013 from beneath Robinson’s Arch (a site adjacent to the Western Wall) and only now finally fully sifted, and because it has one single word etched into it, this mere pebble steps out of the flow of moments to speak to us and to remind us that, as Faulkner wrote in Requiem for a Nun, the past is not only not dead and gone, it’s never even really past at all.

The word is the Hebrew beka, the name of a specific weight. The first letter, the bet, is written backwards. Also, even more mysteriously, the letters are written from left to right rather than from right to left. Was the engraver dyslexic? Or was he (or she!) perhaps illiterate, merely copying letters without being able personally to read them and not doing too good a job? Or, given that several personal seals featuring an analogous kind of mirror script have survived from the First Temple period, was the left-to-right thing intentional? To none of these questions is there an answer, nor will there ever be. What a beka itself was, though, isn’t hard at all to know: the measure is mentioned in the Torah twice, once to describe the specific amount of gold in the nose ring that Abraham’s man Eliezer offered to Rebecca when he first encountered her and realized that she was destined to be Isaac’s wife, and once in the context of the half-shekel tax that served as an annual count of adult Israelites: “[you shall pay] one beka per head, [that is,] half a shekel…for each person over twenty years of age counted in the census.”
Eli Shukron, the lead archeologist on the site, explains the system clearly: “When the half-shekel tax was brought to the Temple during the First Temple period, there were no coins, so they used silver ingots. In order to calculate the weight of these silver pieces, they would put them on one side of the scales and on the other side they would place the beka weight. The beka was equivalent to the half-shekel, which every person from the age of twenty and up was required to bring to the Temple.” It appears, he goes on to say, that the biblical shekel, a weight rather than a coin in the modern sense, weighed precisely 11.33 grams.
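Since a beka is by definition half a shekel, Shukron’s figure lets even a liberal arts major check the arithmetic. The little sketch below is my own back-of-the-envelope illustration, not anything from the excavation report: the 11.33-gram shekel is Shukron’s number, while the town of 250 tax-liable adults is a figure I invented purely for the sake of the example.

```python
# Back-of-the-envelope arithmetic based on Eli Shukron's figure of
# 11.33 grams for the biblical shekel. Everything other than that
# figure is my own invented illustration.

SHEKEL_GRAMS = 11.33            # weight of a biblical shekel, per Shukron
BEKA_GRAMS = SHEKEL_GRAMS / 2   # a beka is, by definition, half a shekel

print(f"One beka weighed about {BEKA_GRAMS:.3f} grams")

# A hypothetical worked example: how much silver would the Temple
# treasury expect from a town of 250 tax-liable adults (an invented
# number), each owing one beka of silver?
adults = 250
total_silver_grams = adults * BEKA_GRAMS
print(f"{adults} adults owe about {total_silver_grams:.1f} grams of silver")
```

In other words, the pebble itself would have weighed in at roughly 5.67 grams, about the weight of a modern U.S. quarter.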

When I look at this little stone, I really do get what Faulkner meant as this thing from ancient times surfaces in our own day to remind us that the past isn’t some sort of midrash on present-day reality invented by moderns to explain why things in the world are the way they appear to be, but an actual record of what once was.
When people debate the status of Jerusalem as though it somehow wasn’t the capital of Israel in ancient times, as though the Temple Mount somehow wasn’t the site of not one but two different Temples that served in their respective eras as the spiritual center of all Jewish enterprise, or as though the relationship of the Jewish people to the Land of Israel is some sort of colonialist fantasy as little rooted in reality as the Belgians’ claim to the Congo or the British claim to India—words somehow fail me as I feel overwhelmed by some unholy amalgam of contempt, irritation, and anger towards people for whom history really is whatever you wish it to be. And then support comes, as Scripture says it always does, from some unexpected corner of the universe. A stone—really just a pebble—appears in the world. It’s tiny. It weighs almost nothing. Its inscription was either intentionally or unintentionally written contrary to the normal rules of written Hebrew, or what we moderns suppose they must have been in First Temple times. It languished under a mountain of dirt not for centuries but for millennia. And then, out of nowhere, it rolls out onto centerstage and says its sole word: beka. And packed into those two ancient syllables are several things: the courage of my own convictions, the undeniable evidence of history confirmed, and no little amount of hope for the future. Not bad for something so tiny it is dwarfed by an average-sized human hand in the photograph above and, at the end of the day, has a vocabulary that consists of a single word. Not bad at all!

Thanksgiving 2018


For as long as any of us can recall, American Jews have celebrated Thanksgiving out of a deep sense of gratitude to God for any number of different things that define our lives in this place: the great prosperity of this land in which we share; the security provided for us and for all by our matchless and supremely powerful military; the freedoms guaranteed to all by a Bill of Rights that basically defines the American ethos in terms of the autonomy of the individual; the specific kind of participatory democracy that grants each of us a voice to raise and a ballot to cast; the freedom to embrace a minority faith—or any faith—without fear, reticence, or nervousness about what others may or may not think; and the inner satisfaction that comes from being part of a nation that self-defines in terms of its mission to do good in the world and to combat tyranny, oppression, and demagoguery wherever such baleful things manage to take root among the peoples of the world.
None of the above strikes me as being anything other than fully true, yet I can’t stop reading op-ed pieces and blog postings that posit that things have somehow changed, that the world now is not as it even just recently was, that it is the past and all its glories that shine bright now rather than the unknown (and unknowable) future, and that every one of the reasons listed above for us American Jews to join our fellow citizens in feeling deeply grateful for our presence in this place could just as reasonably be deemed illusory as fully real. And I hear those sentiments, interestingly enough, coming from people on both ends of the political spectrum as well as from all those self-situated just to the right or left of center. Nor are American Jews alone in their unease: if there is one thing vast swaths of our American nation seem able to agree upon, it’s that the age of great leadership belongs to history and that it is thus our destiny for the foreseeable future to be led by people whose sole claim to serve as our nation’s leaders is that they somehow managed to get themselves elected to public office. No one seems to dispute that this is not at all a healthy thing for the republic. But expressing regret is not at all the same thing as formulating a specific plan to address the situation as it has evolved to date.

To keep this creeping malaise from interfering in an untoward manner as we prepare to celebrate our nation’s best holiday, I suggest we take the long view.
Frederic E. Church was a nineteenth-century man, born in 1826 when John Quincy Adams was in the White House and dead on the 7th of April in 1900 as a new century dawned. He was also one of America’s greatest landscape painters, a member of the so-called Hudson River School and, in his day, one of the most celebrated artists alive. I mention him today, however, not to recall the larger impact of his oeuvre, but to tell you about one of his paintings in particular, the one called “The Icebergs.”


As you can see, the picture (currently owned by the Dallas Museum of Art) is magnificent. But what made it famous in its day was specifically the way in which it was taken by many to capture the surge of self-confidence that characterized America’s sense of its own destiny in the middle of the nineteenth century. One author, Jörn Münkner, characterized the painting’s appeal in this passage composed when the painting was put on exhibition at Georgetown University:
Frederik E. Church's "The Icebergs" pictured the Alpha and Omega of time and tide. It reflected the mid-19th century American world-view that was characterized by the belief in a “Manifest Destiny” according to which the United States…was the New Israel that had been prepared for by the divinity. 1861 saw the U.S. reigning from the Atlantic to the Pacific, from the Gulf of Mexico to the Great Lakes. Nature was regarded as holy and science as sanctified. The belief in the American Garden Eden whose very fortunes were guided by the Creator emanated out of the scientifically correct “The Icebergs.” It was the display of the rare and intoxicating American amalgam of science, religion, and nationalism. The relationship of the actual and the real that was concealed in the painting revealed the idea/fact that scientific thinking in America was shaped by a deep religious faith. Providence guided the scholarly painter's hand.

I find those words somehow inspiring and chilling at the same time, but I see what the author means: even after all this time, the painting hasn’t really lost its ability to suggest the majesty of nature or its timelessness. I get a bit lost on my way from that thought to the notion of manifest destiny inspiring America’s nineteenth-century rise to greatness (and, yes, the whole America-as-the-new-Israel notion is beyond peculiar, as surely also is the fact that the artist was thinking so expansively about American destiny on the eve of what in 1861 would still have been unimaginable carnage), yet I really can see the strength, the power, and the sense of ineluctable kismet mirrored in the majestic icebergs in the picture…and so finding in them a symbol both of America’s uniqueness and of its remarkable destiny is not as big a stretch as I thought at first it would be.
But other nineteenth-century types saw different things in the image of these gigantic icebergs afloat in an endless sea.

Edward Bellamy, once one of America’s most famous authors, has been almost completely forgotten. Yet his 1888 book, Looking Backward, was the third most popular American novel of the nineteenth century, exceeded in fiction sales only by Uncle Tom’s Cabin and Ben-Hur. An early utopian novel, the book tells the story of one Julian West, a young man from Boston who goes to bed one night in 1887 and wakes from his sleep only in the year 2000. Some of the author’s predictions are uncannily correct—he depicts West as enjoying the almost instant delivery of goods ordered without having to visit any actual stores—while other things West finds in 2000, like a universal retirement age of 45, have not turned out quite as the author imagined they might. But it is the author’s postscript to his own work I want to cite here, as he imagines America in the future and uses his own version of the iceberg symbol to express his dismay. Almost certainly thinking of Church’s painting and the expansive optimism it inspired, he wrote as follows:
As an iceberg, floating southward from the frozen North, is gradually undermined by warmer seas, and, become at last unstable, churns the sea to yeast for miles around by the mighty rockings that portend its overturn, so the barbaric industrial and social system, which has come down to us from savage antiquity, undermined by the modern humane spirit, riddled by the criticism of economic science, is shaking the world with convulsions that presage its collapse.

This line of thinking I also understand: for all they appear mighty and invincible as they rise from the sea, icebergs are, after all, just so much frozen water. They melt as they float into warmer waters than can sustain them, which may (or may not) dramatically affect the ocean into which they dissolve but cannot affect the iceberg itself once it disappears into the sea and is no more.
So one image and two distinct interpretations. Of course, both are right. An inert, uncomprehending iceberg was powerful enough to sink the most sophisticated ocean liner of its day in 1912. And the semi-famous iceberg rather prosaically named B-15, which broke away from Antarctica’s Ross Ice Shelf in 2000, is about to melt into the South Atlantic Ocean. At 3,200 square nautical miles, B-15 is larger than the island of Jamaica. Yet its doom was sealed not by weapons of mass destruction or acts of God, but by the sea’s slightly too-warm water. (To read more, click here.) From this we learn that strength and weakness are not as unrelated as their antithetical nature makes them at first appear. Indeed, they are each other’s twins…and from that thought I draw the lesson I wish to offer to my readers for Thanksgiving Day in the Age of Anxiety.

Our nation is currently divided between two camps. Some see America’s great and mighty presence in the world as pointing to a remarkable destiny framed by our ongoing commitment to the foundational principles upon which the republic was founded and still rests. Such people look at Church’s painting and are heartened by what they see, because solid, powerful, majestic icebergs afloat in the sea remind them of our nation, its strong moral underpinnings, its commitment to (the American version of) tikkun olam, and its invincible military. This group includes people who vote red and people who vote blue. But others see our nation coming apart at the seams, a country split into warring factions in which personal liberty is increasingly defined in terms of the sensitivities of the majority and in which justice is meted out entirely differently to people of different races and social strata. Such people look at Church’s painting and hear Bellamy’s warning that even giant icebergs that look stable and impregnable can be undermined by the gentle, unarmed presence of a warm current in the sea. Nothing lasts forever. Every Achilles has his heel. No garden thrives because it was once watered.
So who is right? I propose we give the last word to Bellamy himself, whose afterword to his own novel (which I am currently reading for the first time) closes with these words: “All thoughtful men agree,” he writes, “that the present aspect of society is portentous of great changes. The only question is whether they will be for the better or the worse. Those who believe in man’s essential nobleness lean to the former view, those who believe in his essential baseness to the latter. For my part, I hold to the former opinion. Looking Backward was written in the belief that our Golden Age lies before us and not behind us, and is not far away. Our children will surely see it, and we too who are already men and women, if we deserve it by our faith and by our works.”

Despite it all, that’s what I think too! And I offer that thought—part prayer, part wish, part hope—to you all on this Thanksgiving Day, a day on which all Americans are united by the desire to recognize the good in ourselves and our nation, and to be grateful for the potential to do good in the world that derives directly from that noble sense of what it means to be an American.

Thursday, November 15, 2018

Hatred, Fear, Hope

Like most Jewish Americans, I was caught off-guard back in 2017 by the sight of white supremacists marching in Charlottesville, Virginia, and carrying aloft the flags of the Confederate States of America and Nazi Germany.  (That they were also carrying the so-called Gadsden Flag that was originally used by the Continental Marines during the American Revolution—the one designed back in 1775 by Christopher Gadsden featuring the words “Don’t Tread on Me” beneath a coiled-up, scary-looking rattlesnake—struck me primarily as a sign of how little these people know about the values upon which the nation was founded in the first place.)  The sight of those flags being held aloft proudly and defiantly was beyond upsetting, but not particularly confusing. But what was confusing—to me and I suspect to most—was the chant “Jews will not replace us,” which I hadn’t ever heard before and which I now realize I misunderstood, taking it to mean something entirely different than what it apparently does mean.

Taking the slogan at what I thought was face value, I understood the marchers to be declaring their determination not to allow themselves to be replaced by Jews eager to take over their jobs and leave them without work and eventually destitute. In other words, I imagined this somehow to be tied to the marchers’ skittishness about the job market and their need to find someone to blame in advance for losing jobs they fear they simply haven’t lost yet, jobs in which they fear they will eventually, to use their own word, be “replaced.” It hardly seems like a rational fear, but that’s what it felt like it had to mean, and so I ended up taking it as just so much craziness rooted not in anything corresponding to actual reality but in the malign fantasy that, left unchecked, we Jewish people will somehow take over the world and install our own people in whatever jobs we wish without regard to where such a move would leave the people currently holding them. And that is what I sense most Jewish people—and maybe even most Americans—hearing this chant took it to mean.
But now that I’ve read more, I see that that is specifically not what “Jews will not replace us” means and that the slogan specifically is not about Jews replacing Christians at work at all. Instead, the chant encapsulates the marchers’ fear that we Jews are working not to take over their jobs ourselves but to replace them at work with third-party others chosen specifically to deprive them of their livelihoods and their places in society. And who might these other people be? That, it turns out, is where anti-Semitism and racism meet: the hordes of jobseekers the marchers fear turn out not to be Jews at all, but hordes of dark-skinned immigrants feared already to be pouring over our borders and insinuating themselves into an already-tight job market. And it is those people who, because they are presumed ready to work at even the most menial jobs for mere pennies, are imagined to be threatening the white (i.e., non-immigrant) people who currently hold those jobs and who earn the American-sized salaries they use to support themselves and their families.

To say this is crazy stuff is really to say nothing at all. Yes, we have a huge and so-far-unresolved issue in this country with illegal aliens living in our midst and I’m sure that those people do take jobs that legal residents might otherwise have. And lots of non-crazy people, myself definitely included, are eager to find a way out of this morass that we ourselves have created by failing to police our borders adequately and by allowing the number of undocumented illegals in our midst to grow from a mere 760,000 or so in 1975 to something like 12.5 million today with no obvious solution in sight.
So wanting a reasonable solution to be found—one that is fully grounded both in settled U.S. law and in our national inclination to be just, fair, kind, and generous, and one that doesn’t make after-the-fact chumps out of all those countless millions of people who followed all the rules and immigrated here fully legally—is not crazy at all. What is crazy is the fantasy that Jewish Americans somehow possess the secret power to order Walmart and Costco and every other American business to fire specific employees and replace them with pre-selected others regardless of whether those others are or are not here legally. Crazier still is the contention that American Jews somehow control American immigration policy, and that we are somehow able imperiously to issue instructions to Democratic and Republican administrations alike and expect them to be obeyed. But craziest of all is the belief that, precisely because American Jews are so supremely powerful, we must be attacked violently before we order the administration to let even more immigrants into our nation. That, after all, was the specific reason the Pittsburgh shooter gave for his savagery in a comment posted online just before the attack: to give the officers of HIAS pause for thought before they work to bring in any more “invaders [to] kill our people.” My post-Pittsburgh proposal is that we stop dismissing that line of thinking as aberrant looniness that no normal person could actually embrace and start taking it far more seriously.

It feels natural to consider the various kinds of prejudice that characterize our society as variations on a common theme. And in a certain sense, I suppose, that is true. But these pernicious attitudes are also distinct and different, both in terms of their root causes and the specific way they manifest themselves in the world: misogyny, racism, and homophobia, for example, are similar in certain cosmetic ways, but differ dramatically in terms of the specific malign fantasies that inspire them and thus should (and even probably must) be addressed in different ways as well. And we should also bring that line of thinking to bear in considering anti-Jewish prejudice: similar in some ways to other forms of prejudice, anti-Semitism also has unique aspects that it specifically does not share with other forms of bigotry. Indeed, the fact that the anti-Semitism put on public display in Charlottesville was rooted in the haters’ groundless yet powerful fantasy about the almost limitless power imagined somehow to have wound up in the hands of the hated is all by itself enough to distinguish anti-Semitism from other kinds of prejudice. And not at all irrelevant is that it appears not to matter at all how impossible it feels to square that fantasy about Jewish powerfulness with the degree to which powerless Jews have suffered at the hands of their foes over the centuries, and particularly in the last one. In that regard, I would like to recommend a very interesting essay by Scott A. Shay, the author and Jewish activist, that was published in the Pittsburgh Post-Gazette a few days after the shooting at Tree of Life Synagogue and which readers viewing this electronically can access by clicking here.  

Nor is this a problem solely of one extreme end of the political spectrum. In the wake of Pittsburgh, the spotlight is on the anti-Semitism that characterizes the extreme right, but the same light could be shone just as brightly on the anti-Semitism of the extreme left…and particularly when it promotes hostility toward Israel’s very right to exist and to defend itself against its enemies. Indeed, part and parcel of this fantasy regarding the power of the Jewish people is the perception of Israel—not as an outpost of democracy smaller than New Jersey trying to survive in a region in which it must deal with nations and political terror groups that openly express their hope to see Israel and its Jewish population annihilated—but as an all-powerful Goliath seeking to eradicate its innocent opponents militarily rather than to negotiate fairly or justly with them. Coming the week after Hamas fired over five hundred missiles at civilian targets in Israel, each capable of killing countless civilian souls on the ground, the image of Israel as the aggressor in its ongoing conflict with Hamas sounds laughable and naïve. But maybe we should stop laughing long enough to ask ourselves how we might address this myth of Jewish power—whether focused on American Jews imagined to be in control of American foreign policy or on Israeli Jews imagined to be intent on crushing their innocent victims for no rational reason at all—and how we might address not this or that symptom of the disease, but the disease itself.
Distinct (at least in my mind) from theological anti-Semitism rooted in the supersessionist worldview promoted for so long by so many different Christian denominations, this specific variety of anti-Semitism seems rooted not in messianic fervor but in fear. And that, I think, is probably how to go about addressing it the most effectively: by pulling that fear out into the light and exposing it as a fantasy no less malign than inane. By forcing young people drawn to the alt-right to look at pictures of the innocents murdered in Pittsburgh and to ask themselves if they truly have it in them to believe that U.S. government policy was until two weeks ago being dictated by 97-year-old Rose Mallinger or by Cecil or David Rosenthal, both gentle, disabled types whose lives were built around service to their house of worship. By forcing young people poisoned with irrational hatred of Israel to look at the portraits of the 1,343 civilians murdered by Palestinian terrorists since 2000 and to see, not predators or fiends, but innocent victims of mindless violence. By insisting that young people drawn to fear Jews and Judaism be exposed to the stories of Shoah victims—and, if possible, to survivors themselves while they are still among us—and through that experience to understand where groundless prejudice can lead if left unchecked and unaddressed.

To hope that no one is drawn to extremism is entirely rational, but it really can’t be enough. Just as young people who seem drawn to a racist worldview should be forced—by their parents and their teachers in school, or by society itself—to look into the eyes of those poor souls gunned down in the Emanuel A.M.E. church in Charleston on June 17, 2015, after welcoming their murderer into their midst for an hour of Bible study, so should society itself rescue young people from themselves once they are perceived to be embracing the kind of anti-Semitism that led directly to Pittsburgh…forcing them to confront the bleak hatred that has taken root in their hearts and to see it for what it is: a fantasy rooted in fear that can be overcome and eradicated by anyone truly willing to try.

Thursday, November 8, 2018

Looking Back and Forward


So the much-anticipated midterm election came and went, leaving all Americans, regardless of party affiliation or political orientation, finally united on at least one point: that the Congress, now a bicameral house formally divided against itself, will accomplish nothing at all for the foreseeable future...unless its members can find it in their hearts to compromise with their opponents and to craft legislation so moderate and so overtly and appealingly reasonable that people on both sides of the aisle will fear angering their constituencies by not supporting it. How likely is that to happen? Not too! Still, that thought—that in the absence of flexibility, tractability, and generosity on the part of all, nothing at all will be accomplished and no one will have a record (other than of obstructionism) to run on in future elections—has a sort of silver lining: whatever legislation is passed by the new Congress will have to be of the rational variety that Americans of all political and philosophical sorts can support. So there’s at least that!
As my readers all surely know by now, my training—my academic training, I mean, as opposed to my spiritual training in rabbinical school—is in ancient history and the history of ancient religion. And I’ve been reading just lately some interesting analyses of the mother of all democracies, the one set in place to govern the city-state of Athens, and the specific way our American democratic system does and doesn’t preserve its ancient features and norms. Obviously, a long road stretches out between them and us! Even so, however, there are at least some features of Athenian democracy that are definitely worth revisiting.

Some of the specifics will be unexpected to most. Ancient Athens was governed by a council of 500 called the boulé, whose members were chosen—not by an informed electorate casting ballots for the candidate of their choice—but by lot, so that fifty men chosen at random to represent each of the ten tribes of ancient Athenians were put in place and handed the reins of government. Each served for one year, but no one could serve more than once in a decade, nor could any citizen ever serve more than twice. The boulé had its own hierarchy, however: its in-house leadership, called the prytany, consisted of fifty men, also chosen by the casting of lots, who served for one single month and were then replaced. The idea was simple—and not entirely unappealing: by choosing both the people’s leaders and those leaders’ leaders at random, it was certain that the power of governance would rest neither with power-hungry people eager to rule over or to dominate others nor with anyone motivated by the possibility of personal gain through service to the nation. The leaders of Athens were thus disinterested parties, people with no specific yearning to be in charge yet whom fate somehow arbitrarily put into positions of leadership nonetheless. Yes, it was surely true that the inevitable blockhead would occasionally end up chosen to serve, but such a person would be vastly outnumbered by more thoughtful, more reasonable individuals. (The boulé did have five hundred members, after all.) Does the system have an antique feel to it? Perhaps. But the specific point of it—keeping power out of the hands of people who lust after it and firmly in the hands of people who would be happier doing something else entirely—not so much!
The situation that prevailed in ancient Athens appeals in other ways as well. The boulé, for example, lacked the power to make any final decisions on its own. To do that, all citizens were invited to participate in a forum called the ekklesia that met every ten days for the specific purpose of ratifying the boulé’s decisions before they became law. (This body met on the Pnyx, a hill just west of the Acropolis.) All citizens were automatically members of the ekklesia and were welcome to speak up and participate in pre-vote debate and discussion. The power was thus fully vested in the people—the boulé could pass all the bills it wanted, but none of them could become law until the people signed on.



Etymologically, the “demo” in “democracy,” from the Greek demos, references the full citizenry, the people of the nation who self-governed not by electing people to govern them, but by governing the governors and by requiring that the decisions of the boulé be ratified by the public. Is this sounding at all appealing to you? The more I think of it, the more remarkable it sounds to me…and, yes, in some ways intensely appealing. Would this work in a nation of 328 million citizens like our own? Not without some serious adjustment—but the notion that the very last people to whom power should ever be granted are those specific individuals who yearn the most intensely for it, that idea has some serious merit in my mind!
And then there was the concept of “ostracism,” which I think we should definitely consider bringing back. The English word means exclusion from a group, usually because of some perceived scurrilous misbehavior. But the word goes back to Athens, where it denoted something far more specific: the right of the citizenry, the demos, one single time in the course of a year to vote to expel from the city for a period of ten years anyone perceived as having become too powerful—and who thus, merely by being present in the city, weakened the democratic principle of power being vested fully in the hands of the people. It didn’t happen every year, but once the decision was taken—and if more than six thousand citizens voted to ostracize by writing the name of the individual they wished to see gone on a piece of broken pottery called an ostrakon—then the “ostracized” individual was forced to leave the city and not permitted to return for at least a decade. There was no possibility of appeal; ostracized individuals were simply given ten days to organize their affairs and depart. There was a certain risky arbitrariness to the whole process—there was no obligation for any citizen to state why he was voting to ostracize whomever it was he was voting to exile and there was no judge or jury—but also something exhilarating about a procedure designed to place the power in the hands of the people to exile anyone at all (including civic leaders, generals, the wealthy, and the city-state’s most influential citizens) for fear that that specific individual was exerting a malign influence on the right of the people to self-govern. And there was at least one profound safeguard against abuse in the fact that the ostracized individual had to be voted off the island by six thousand citizens. Even so, the procedure eventually died out. (The last known ostracism took place towards the end of the fifth century BCE.)
But it is also thrilling to imagine a democratic city-state in which anyone who yearns for power must temper such yearning with the knowledge that being perceived to be acting other than in the best interests of the people could conceivably lead to being sent away regardless of the immensity of one’s fortune or the breadth of one’s influence. 


There were darker sides to Athenian democracy as well. Citizenship was limited to males over the age of eighteen; women were completely excluded both from membership in the boulé and from participation in the ekklesia. Nor did all citizens choose to participate fully in their fully participatory democracy. Indeed, most citizens failed to show up most of the time. To increase attendance, in fact, a decision was made around 400 BCE to pay citizens who showed up for their time, thus making it more reasonable for members of the working class to take the time off to attend. But the fact remains that, just as in our American republic, the power was in the hands of those who chose to exercise their civic right to participate and not in the hands of those who chose to express themselves merely by complaining about the status quo. Is that a flaw in the system? I suppose it would depend on whether you ask the voters or the complainers!
This isn’t ancient Greece. But what we can learn from considering the political heritage bequeathed to us by the Athenians is that democracy is not manna from heaven offered to some few worthy nations and not to others, but an ongoing political experiment that needs constantly to be revised and reconsidered as it morphs forward through history. There is no end to the books I could recommend to readers interested in learning more, but I can suggest two titles that I myself have enjoyed and that would be very reasonable places to start reading: A.H.M. Jones’s Athenian Democracy, first published in 1957 and reissued in 1986 by Johns Hopkins University Press, which I read years ago, and also a newer book, Democracy in Classical Athens by Christopher Carey, published in 2000 by Bristol Classical Press in the U.K.

Thursday, November 1, 2018

Pittsburgh

Like many of you, I’m sure, I took biology in tenth grade. It was a long time ago. I barely remember high school, and specific classes even less than the general experience, but I do remember one specific incident in biology class that made a huge impression on me then and remains with me still. I’m sure it’s no longer allowed, as well it shouldn’t be, but the experiment itself was simple enough. A frog was set into a petri dish filled with cool water. The frog looked happy enough, having no concept of what was soon to come…and also not able to extend its neck far enough out to see that the petri dish itself was being held aloft by a black metal frame that also housed a Bunsen burner positioned just beneath the dish in which the frog was seated. (Do frogs even have necks?) The frog could have hopped out at any moment. But why would he have? He was content, he was (he thought) among friends. It was the fall following the summer of love. I have the vague sense—although this can’t possibly be true—that “Strawberry Fields” was playing softly in the background.

And then our teacher, whose name I’ve long since forgotten, lit the Bunsen burner and the fun began. The flame was low enough so that the water would heat only very slowly, incrementally, almost unnoticed by us…and, more important, by the frog in the dish. The point of the experiment was simple enough: to demonstrate that, if the water were only heated up slowly enough, the frog would actually be paralyzed by the heat and thus unable to avoid the sorry end that appeared to await him and which in fact did await him even though he could easily have escaped his fate earlier on had he understood things more clearly. Or she could have. It really was a long time ago.
The world is full of frogs in petri dishes.

Facebook started out as a pleasant way for friends to stay in touch and then grew into something that would surely have been unrecognizable to the people dreaming it up in Mark Zuckerberg’s dorm room. And, somewhere along the way in that amazing growth from 1 million users in 2004 to 2.2 billion active users at present, a line was crossed that cannot be crossed back over, and which thus obliges Facebook to deal somehow with the unexpected and surely unwanted ability it somehow possesses to be manipulated by its own users to influence elections and to invade people’s privacy in a way that many savvy users still can’t entirely fathom in all of its complexity.
The whole concept of on-line DNA analysis started out as a clever way for people to learn more about their families’ histories and about their own genetic heritage. But as the data banks at ancestry.com, 23andme.com, and other analogous sites grow larger and larger on a daily basis, a line has been crossed there too that cannot be uncrossed and that will now oblige us all to deal with the ability of scientists, including (presumably) those who work for the government, to invade the privacy of people wholly unrelated to the enterprise who themselves have never signed up or sent in a sample of their DNA for analysis. (To revisit what I wrote about this truly shocking phenomenon a few weeks ago, click here.)

Kristallnacht, the eightieth anniversary of which falls next week, was another such frog-in-a-petri-dish line. Things were dismal for the Jews of Germany and Austria long before 1938, but Kristallnacht—in the course of which single evening almost 2000 synagogues were destroyed, 2550 Jewish citizens died, 30,000 Jewish men were arrested and sent to concentration camps, and tens of thousands of Jewish businesses were plundered—made it kristall clear that whatever Jewish souls fell under Nazi rule were on their own and that that line into a dark, almost unimaginable future was one that simply could not be crossed back over. Indeed, the worst part of Kristallnacht was not the pogrom itself, as horrific as it was, but its implications for the future and the unavoidable conclusion to be drawn from the events of that gruesome night that there apparently was no level of anti-Semitic violence that the world could not somehow learn to tolerate. Kristallnacht, of course, did not come out of nowhere. Nazi anti-Semitism was hardly a secret. By 1938, the Jews of the Reich had been subjected to ever-increasing levels of degradation, humiliation, and discrimination for years. Obviously, they all noticed it, just as the frog in my classroom must surely have noticed the water warming as well. What the frog failed to grasp was that there was going to be a specific moment at which his ability to hop out of the dish was going to be gone and that he would have no choice but to meet his fate in that place. And that is what the Jews who had bravely decided to weather the storm in place also failed to grasp until it finally was too late to do otherwise and their fates were sealed, their doom all but assured.
Is Pittsburgh that line in the sand that we will all eventually see clearly for what it was? Or was it just a terrible thing that an awful person with some powerful guns managed to accomplish before he was finally subdued by the police? The answer to those questions lies behind the answers to others, however. Was Pittsburgh more about the rise of the so-called alt-right than about anti-Semitism per se? (The Anti-Defamation League noted that there was almost a 60% rise in hate crimes directed against Jews or Jewish targets from 2016, the year of the presidential election, to 2017, the year of Charlottesville. No one doubts that the statistics for 2018 will be higher still.) Or is this more about guns than Jews?  We have become almost used to gun violence in our country—we actually name the incidents (Columbine, Orlando, Sandy Hook, Parkland, Fort Hood, San Bernardino, etc.) because it would otherwise be impossible to keep track of them all—so it feels possible to explain Pittsburgh (or rather, to explain it away) as just one more notch on that belt rather than as a decisive moment in American Jewish history. But is that reasonable? Or is Pittsburgh less about Jews or guns, and more about the way that houses of worship seem specifically to enrage a certain kind of American bigot, the kind who can spend an hour studying Bible with gentle, harmless church folk and then take out a gun and methodically attempt to kill all the others in the class?

Or is this something else entirely? That’s the question I found churning and roiling within as I contemplated the events of last Saturday in Pittsburgh and tried to make some sense out of it all.
It’s interesting how little read or discussed the most accessible studies of anti-Semitism are—Léon Poliakov’s The History of Anti-Semitism, Edward Flannery’s The Anguish of the Jews, David Nirenberg’s Anti-Judaism: The Western Tradition, Bernard Lazare’s Antisemitism, Its History and Causes, Rosemary Ruether’s Faith and Fratricide, and Daniel Jonah Goldhagen’s The Devil that Never Dies, just to name the books I personally have found the most rewarding and informative over the years—including specifically by the very Jewish people who should constitute their most enthusiastic audience. Is that just because they are incredibly upsetting? Or is there a deeper kind of denial at work here, one rooted in a need to feel secure so intense that it simply overwhelms anything that might disturb people who live in its almost irresistible thrall?

I was a senior in college when I first read André Schwarz-Bart’s The Last of the Just. It is one of the few works of fiction I’ve read many times, both in French and in English, and is surely among the most important works of fiction I’ve read in terms of the effect it had on me personally in shaping my worldview. (It also led, albeit circuitously, to my choice of a career in the rabbinate.) The book, in which are depicted episodes from the life of one single Jewish family from 1190 (the year of a horrific pogrom in York, England) to 1943 (when the family’s last living scion is murdered at Auschwitz), is upsetting. But it is also ennobling and, in a dark way that even I can’t explain entirely clearly (including not to myself), as inspiring as it is disconcerting. It was once a famous book—the first Shoah-based book to be an international bestseller and the winner of the very prestigious Prix Goncourt in 1959—but it has fallen off the reading list of most today: how many young people have even heard of it, let alone actually read it? I suppose people still read Anne Frank’s diary and Elie Wiesel’s Night, the two most prominent books about the Shoah of all…but both books are tied to their authors’ specific stories and neither is “about” anti-Semitism itself in the way Schwarz-Bart’s book is. In my opinion, that’s why they have remained popular—because they’re basically about terrible things that happened to other people—and The Last of the Just hasn’t.
What should we do in the wake of the Pittsburgh massacre? Clearly, we need to find the courage to speak out and to say vocally and very strongly to our elected officials that we cannot and will never accept that this kind of thing simply cannot be prevented in a society that guarantees its citizens the right to bear arms. And, just as clearly, we need to make it clear to the world that this kind of aggression, far from weakening us, actually strengthens us and helps us find the courage to assume our rightful place in the American mosaic. But we also need to lose our inhibitions about learning about our own history. Pittsburgh was about the recrudescence of the kind of anti-Semitic violence many of us thought to be well in the past. To understand the deeper implications of Jews at prayer being murdered in their own synagogue, we don’t need to read any of the million statements issued by public officials, Jewish and non-Jewish organizations, and countless individuals over the last few days. What we do need to read is Schwarz-Bart and Ruether, Nirenberg and Flannery, and to internalize the lessons presented there. And we need to take the temperature of the water in our petri dish and only then to negotiate the future from a position of informed strength characterized by a clear-eyed understanding of what it means to be a Jew in the actual world in which we live.

Thursday, October 25, 2018

Machines and People


Art is the medium that allows an artist to communicate something profound and meaningful to his or her audience in a way that does not merely inform but truly inspires...and which also allows the artist to transcend the brevity of human life to speak not solely to contemporaries, but to countless future generations as well.  When put that way, the underlying concept sounds fairly abstruse. But when considered in the context of real life, it feels almost natural: when we sit in the audience and watch King Lear talking to his daughters on stage as the curtain goes up and the play begins, it's not at all difficult to understand that it's only him talking to them in a certain sense, but—and far more profoundly—it’s really the playwright talking to us. Indeed, the difference between a great artist and a hack lies precisely in his or her ability to communicate deeply and movingly with an audience in a way that merely telling them that same information would not even slightly accomplish: what we learn in a few minutes of King Lear about parent-child relations and the degree to which greed can poison even the most natural kind of love couldn’t possibly ever be conveyed as deeply or as effectively by even the most talented university lecturer giving a public talk about the ins and outs of childrearing. Or about the nature of love. Or about greed.

That all being the case, art requires three things (or feels as though it must): an artist, an audience, and an artistic medium of some sort. The first and the second absent the third is just two (or more) people standing in a room. The first and third absent the second is the artistic version of a tree falling in a forest with no eardrum present to vibrate sympathetically when the tree hits the earth. The second and third is, at best, unrealized potential, a batter at the plate and a ball resting on the pitcher’s mound…but no pitcher in sight actually to throw the ball and, as such, no game to watch and either to enjoy or not to enjoy. And, of course, also no winner or loser.
So that’s two living, breathing people and one artistic medium that feel requisite. But now that we live in a new world in which machines can think—if not quite in the way human beings do, then at least to an extent that even a quarter century ago would have been unimaginable—the time may have come to revisit those requirements.
Take, for example, these eyes:


They are expressive, thoughtful, fully human. Is it a man or a woman? Is that the hint of a moustache under his nose or just a shadow? These eyes suggest a certain sadness to me, a certain world-weariness born of insight into the way that people are so often their own worst enemies. Without being able to see the rest of the face, this person seems to exist outside of time. If the rest of the picture depicted him or her dressed like an Italian aristocrat of the sixteenth century, I could believe it. But if the rest of the picture portrayed him as a cowboy or her as an astronaut, I could believe that too.
Here’s the rest:


So, not a cowboy or a doge, but a Dutchman. And this, I can hear you thinking, must surely be a work of Rembrandt, the greatest of all portrait painters and (of course) a Dutchman himself. But this painting is neither a Rembrandt nor a work by any of his contemporaries or students. It was created by a 3-D printer that was programmed over the course of an eighteen-month experiment by a team of art historians, computer scientists, and engineers brought together by Microsoft, the Delft University of Technology in Holland, and two Dutch art museums, the Mauritshuis in The Hague and the Rembrandt House Museum in Amsterdam. The idea was to bring together digital data culled from 346 of Rembrandt’s real paintings created between 1632 and 1642 and to create a portrait of a man not only dressed in the style of the time and possessed of facial features similar to those of the men in Rembrandt’s real paintings, but rendered with the finest gradations of shading, texture, perspective, brush usage, pigmentation, and lighting: a new portrait, one of no one at all, yet one that surely feels as though it could be of someone whom Rembrandt could easily have known.
Is that art? It’s hard to say. The work has an audience and it exists…but does it have an artist? Clearly, a 3-D printer is not an artist, just a machine that does its programmers’ bidding. But are its programmers then the artists? I want to say no, that this project was just some digital silliness dreamt up by people because they had the technical skill to pull it off. But then I look again at the man’s eyes…and I feel a certain sense of kinship with this non-man who never existed. Does that make me a crazy person? Or does that make this a work of art?

Christie’s is about to auction off a portrait called “Edmond de Belamy, from La Famille de Belamy,” a work created by an algorithm (whatever that means exactly) and thus a product solely of its machine-creator’s artificial intelligence. The bidding is going to start at $10,000. The creators, if that’s the right word (since they specifically did not create the painting), are a trio of French businessmen with degrees in business and computer technology who call themselves Obvious. No artistic implement was used to create the picture—no pencils, no paints, and no drawing tools of any sort. Nor was human creativity involved other than tangentially: what the members of Obvious did, almost simply, was to feed thousands of portraits from the 14th to the 20th centuries into a computer that had been programmed to analyze the images in a dozen different ways and then attempt to mimic them as best it could. And here is, so to speak, Edmond de Belamy himself:


Is this art? Most of me still wants to say no. But I find myself unexpectedly unsure as I look carefully at the painting and allow it to speak to me in precisely the way great works of art communicate outside of language and without being themselves animate.
I saw Her, Spike Jonze’s 2013 movie, and came away unconvinced that a man could truly love a machine, even one possessed of as intelligent and enticing an operating system as the one whose voice in the movie is Scarlett Johansson’s. Machines are not people. They cannot love. They cannot reproduce. But can they create? That is the question the portraits pasted in above awaken in me.
These questions lead to others. Can machines make music? Can they write books? Can they make scientific discoveries other than by processing huge amounts of data that their human masters have programmed into them? All these questions have their proponents on both sides. Listen, for example, to Drew Silverstein, the CEO and co-founder of Amper, a company eponymously named after its sole product, an artificial-intelligence music composer. Touted as the ultimate in artificial creativity, the program, so claims its founder, can create “unique, professional music tailored to any context in seconds” once you’ve provided it with the style of music you wish it to create, the mood you’d like to convey, and the length of the piece of music you wish to end up with. It’s beyond impressive. (To hear the whole spiel, click here.) And the product is certainly something like music. Maybe it even is music…at least in the sense that what they market as “cheese food” is some version of cheese. But what it lacks is the inner quality that, at least for me, defines what music—and what art itself—is: the ability to transcend the temporal and physical boundaries of the universe to communicate deeply moving ideas and emotions through the medium of human creativity. And that is what is lacking in all of the above. If there is no human artist, then there simply is no one for me to commune with through the medium of his or her art, no one to speak to me either deeply or superficially. Or at all. And without that psychic bridge between one human heart and another, all that’s left is technique and content.
Coming closer to my own turf, I find myself wondering if machines can write books. You may recall reading in George Orwell’s 1984 about a world in which the “proles” of a dystopian future solely read books written by machines. You may also be aware that amazon.com features over 10,000 books by one Philip Parker, each of which is computer-generated and so, at least in some sense, “written” by a machine—but those books are merely compendia of facts and data, so hardly literary works other than in the sense that tax returns are or that telephone books would be if there were still any such thing. But other efforts are more intriguing. A Russian computer scientist, Alexander Prokopovich, programmed a computer to produce his (or do I mean, its) 2008 novel, TrueLove, an attempt to tell the story of Tolstoy’s Anna Karenina in the style of one of my own favorite authors, Haruki Murakami. It was, however, not deemed a particularly successful undertaking and is no longer in print. (For a fairly dismal appraisal of Prokopovich’s efforts, click here.) Others will do better, I’m sure: to teach a computer to produce a text that retells a story that it has been programmed to regurgitate on command, using a specific set of literary quirks and tendencies it has also been programmed to bring to bear in its effort to recast the story in different words, doesn’t sound anywhere near impossible. But we’re back to the tree in the forest: if there is no beating heart inside an actual human breast with which I am being invited personally to commune through the medium of that person’s art, then there is—at best—a document, a story, or a book…but not literature. An image but not a painting. Sound, but not music.
The bottom line, at least for me, is that art should be defined first and foremost as a mode of communication, as a way for two souls to meet even if their possessors never will or even could. If there is no other person involved, then even the most sophisticated effort to mimic art is just so much unrealized potential. Art, like love, requires two.