Thursday, January 26, 2012

Watching Downton Abbey


Like many of you, I’m sure, Joan and I have been watching Downton Abbey, the latest “Upstairs, Downstairs”-style British melodrama set among incredibly wealthy (and even more incredibly overdressed) Brits who are in many ways (although clearly not in all) closer to their servants than to each other. But it’s not the whole dressing-for-dinner thing I want to write about here—although it would be interesting in its own right, and not especially flattering to ourselves, to compare these people’s dining habits with our own—but the backdrop to the entire story to date, which is the First World War. At first, the war is safely distant from Downton Abbey: it is being fought “over there,” wherever “there” is at any given moment. Mostly, the fighting is in France. And it’s the kind of thing the lower classes support by putting their sons’ lives on the line, while the upper classes hold endless charity balls to support “our boys.” (No one ever seems to notice that “our” boys aren’t their boys at all for the most part, but their servants’ sons and their less-well-off neighbors’.) But then, as the years pass, the upper classes do become involved in the real way that people become involved in war, and a good deal of the drama in this second season involves the tension that inevitably grows out of the realization that the rigid class system Britain has self-servingly set aside to allow young men of all classes and backgrounds to die side by side in the trenches is not easily going to snap back into place at the cessation of hostilities, that something permanent has changed in British society. And, of course, it is equally obvious that no one is entirely sure what that change will bring in its wake or whether that kind of massive shift in societal norms will turn out to be a golden opportunity for a nation to advance into a new future or a Pandora’s box that only looks like a treasure chest before anyone figures out how actually to open it.

Ken Follett’s latest book, Fall of Giants, is built around the same set of themes and also has the First World War as its background. I’ve read all of Follett’s books, I believe, and I look forward to the next installment in the trilogy of which Fall of Giants is only the first volume. But that single notion that animates both Downton Abbey and Follett’s book—that the First World War was one of those watershed events in history that altered society so completely that what followed was in many ways a complete break with what came before—is what interests me in both works. I recommend Follett’s book to you all. It’s clever, engaging…and, even at a cool thousand pages, a brisk read. I’m not sure when the second book will be out, but I’ll read it as soon as it appears and tell you all what I think.

The First World War, the backdrop for both works, is in many ways the forgotten war of our time. Both my parents were born while it was raging, yet in our home—as, I’m almost certain, in yours as well—“the” war without any further qualification meant the Second World War, not the First. In some weird way, the angel of death passed over my family in this regard: my father was an infant when the U.S. entered the war in 1917 and my grandfathers were in their thirties, so no one was called up and none of my relatives was among the 116,708 Americans who died during the conflict. But that’s only my own personal story…and the numbers themselves are shocking even by the standards of the Second World War.

Consider, for example, that on July 1, 1916, the first day of the Battle of the Somme, Britain suffered almost 60,000 casualties, roughly a third of them killed. That is about as many men as America lost in the entire Vietnam conflict. This is not a number to pass quickly by: imagine, if you can, that the entire Vietnam War took place on a single day—and then, when it was all over, the sun set, then rose the following morning, and the battle simply recommenced. By the time it was all over, there were over 627,000 casualties on the side of the Allies. The Germans and their allies lost about 456,000, but not the one that might have made all the difference: the young Adolf Hitler, serving with the 6th Bavarian Reserve Division, survived the battle after being wounded in the leg by a shell fragment. Together, that makes well over a million casualties. What’s stunning about the Battle of the Somme, though, is not only the almost unimaginable carnage, the uncountable dead. What’s amazing is that after months and months of fighting—the battle began on July 1 and ended in mid-November—the entire result gained by all that horrific loss of life was that the German line was pushed back about seven miles towards the east. It doesn’t sound like much. It isn’t much. Obviously, the men fighting the battle could not have known that, when it was all over, nothing at all would have been accomplished that even the most sympathetic military historian could possibly consider to have been worth the loss of life. There were endless acts of bravery, of selfless sacrifice. But it should be possible for people like ourselves looking back almost a century later to honor the sacrifice of the dead without losing sight of the fact that it was, in the end, a draw. No one won much of anything. Some strategic shifts were recorded. The Battle of the Somme was the first battle in which tanks played a major role; the tank itself was regarded by the British as a secret weapon that, it was hoped (incorrectly, as it turned out), would prove decisive in overrunning the enemy’s trenches. But in the end…it came down to a German retreat, and a concomitant Allied advance, of about seven miles, a distance once famously set against the number of people who died at the Somme to yield the conclusion that every single centimeter of advance cost the lives of two young soldiers.

But what is true of the Somme is true of the First World War itself. The losses were staggering. On the Allies’ side, about 5,525,000 people were killed and almost 13,000,000 wounded. On the side of the Central Powers, there were over four million dead and well over eight million wounded. The number to consider, though, is this: on both sides, when the guns of August finally fell silent, the total number of people killed, wounded, or missing in action came to about 38,880,500. It’s a staggering number. And what did those people die for? That, actually, is the real question I would like to put forward.

I can remember sitting in Mrs. Gore’s American History class in eleventh grade at Forest Hills High School and, as I listened to her teach about the First World War, thinking even then that I had no idea what she was talking about, that she kept falling back on explaining how the war got started—the assassination of the Archduke Franz Ferdinand in June of 1914 was the trigger, as we all learned somewhere along the way—without explaining how exactly someone shooting someone else could possibly have led to the deaths of almost ten million people. Worried about the Regents Exam in American History I would have to take in June (I was that kind of student), I duly memorized all the alliances and intricate treaties that brought nation after nation into the conflict. Barbara Tuchman’s great book, The Guns of August, had been published just a few years earlier—I started eleventh grade in 1968—and I remember reading it with great interest and still being unable to understand what the whole thing was about. One of the young men depicted in Downton Abbey as going off to war says that he’s willing to put his life on the line because he truly believes in the cause. But what exactly was the cause? He didn’t say. And that’s the question I’ve never been able to answer for myself with any degree of satisfaction.

As far as I can see, the war ended and everybody went home. There was no invasion of Germany. The survivors picked up their lives. France went back to being a country, not the world’s battlefield. The Kaiser was deposed and Germany became a republic. In the end, though, I think it could reasonably be said that this massive paroxysm of inexplicable violence that cost millions upon millions of people their lives led to some political changes in some of the participating countries, to some social changes in others, and to almost nothing meaningful in still other participating countries (like our own, for example). Yet the bitter legacy of defeat in Germany led to the rise of Nazism, which eventually plunged the world into an even more violent war, one that cost the lives of over 24,000,000 soldiers and another fifty million civilians. The numbers are beyond staggering. If there hadn’t been a First World War, would there have been a Second? That’s the question to ponder—and I say that both as an American and also as a Jew who has not known a single day since adolescence not at least partially devoted to contemplating the legacy of the Shoah. One could certainly make a convincing case that the Germans would never have embraced Nazism had their economy not been on the verge of collapse, a disaster that was a direct result of their defeat in the Great War. Is it pointless to ponder these questions now? It isn’t if such ruminative thought inspires us to consider how impossible it is to know the consequences of our actions—and I’m thinking here principally on the national level—in advance. Could Woodrow Wilson, sailing to Europe to attend the Paris Peace Conference in 1919 (an effort for which he was awarded the Nobel Peace Prize), have imagined Treblinka in his worst nightmares? I feel certain that he could not have even begun to imagine the horror that the world would experience within a few short decades. Nor, therefore, could he have imagined the consequences of being party to a treaty that insisted both that Germany alone was to blame for the war and that the price of paying for its incalculable devastation should be Germany’s alone, contentions that a historian like Wilson should certainly have understood to be at best only partially true.

In the end, no one can see the future. But as we move forward as a nation, we should keep the lesson of the First World War always in our hearts. Fought for no reason anyone has ever been able to identify clearly to me, it led, circuitously but eventually, to horrors that would previously have been considered unimaginable. Actions have consequences. We just can’t ever quite know them in advance. That thought need not paralyze us, however. Instead, it should instill in us a deep obligation to consider our actions as carefully as possible before going off on adventures that could conceivably lead us to places we haven’t yet thought of going—or of having to go.

Thursday, January 19, 2012

The Singularity Strikes Back

Wednesday was an odd day. I opened up my computer and tried to look something up on Wikipedia—I needed to know whether Rabbi Isaac Alfasi, the greatest rabbi of eleventh-century Morocco and Spain, was born in 1003 or 1013—and it wasn’t there. This was very serious. How could it not be there? It’s always there. Obviously, I understand that the universe suffers various glitches and hitches on a daily basis. But that Wikipedia, the source of all knowledge, could somehow be absent felt far too disorienting to be waved away as a mere hiccup in the matrix. But where was it? I looked again…and this time I noticed that the screen was more black than blank. And then, as the words formed on my screen, I finally got the point: Wikipedia had intentionally gone dark for a day to protest a bill currently before Congress. Later in the day, I tried to google some information that I would ordinarily have looked up in Wikipedia and found that Google was, if not actually dead, then at least in mourning: it was wearing a black band intended, so I soon learned, to indicate its deep sadness regarding the same legislation that Wikipedia had temporarily stopped existing to protest.

I have to say that they both got my attention. The legislation the powerhouses of the ‘net have gone to war against actually consists of two different bills: the one before the House called SOPA, which stands for the Stop Online Piracy Act, and the one before the Senate called the Protect I.P. Act. At first, it seemed like an odd thing to fight: surely the masters of Wikipedia and Google aren’t in favor of online piracy! But the more I read, the more I began to understand what the fight is really about. Just a few weeks ago, these bills were considered obscure no-brainers, the kind Congress would pass easily because it was hard to imagine why anyone would vote against them. The big “old school” entertainment industry powerhouses, led in part by the Walt Disney Company, were strongly in favor of legislation that would protect them from off-shore, internet-based companies that distribute the industry’s property for free or almost for free to their own clients. And, indeed, this is what really happens out there: companies that produce movies, books, music, even television shows in the United States and sell them via the traditional media outlets have suddenly found themselves facing competition, if that’s the right word for it, from internet companies located well beyond the normal reach of American justice, which simply post the material for free (or not for free) on their websites without paying anything at all for the right, let alone royalties to the actual owners of the purloined property.

That part, I know all about. If you surf around the ‘net for long enough, you can easily find sites featuring Hollywood movies for download that are still in the theaters here. Just the other day, quite by accident, I found a site offering a download of over 1,000 books for my Kindle, including (the ad said) the entire New York Times bestseller list, for a fee less than the cost of almost any one of those books on the Times’ list. Nor is this only about intellectual property rights: the Motion Picture Association of America has estimated that this kind of off-shore piracy costs the American economy something like 100,000 jobs per year.
Clearly, the movie industry is still vastly profitable. But that’s hardly the point: the fact that a company is doing well doesn’t grant competitors the right to steal its product and sell it themselves, let alone give it away.

On the other hand, the government isn’t entirely innocent of wrongdoing: it has long conspired with readers to deny authors their royalties by buying up popular books and then simply letting members of the public read them for free via the public library system. (In other countries, such as the U.K. or Canada, authors are compensated for revenue lost to library lending, but not in the United States.) Still, if the government, even if its own hands aren’t entirely clean, wishes to address the problem of on-line piracy, who could be opposed?

Apparently a lot of people! The way the bills deal with these on-line pirates is to make them undiscoverable by keeping them from appearing in the results when people use the most popular search engines to search for things like “free movies” or “free books.” Instead, the search engines could be made to direct people looking to download, say, a book or a movie to the actual owner of the property rather than to a rogue operation that will sell what it does not actually own for a fee of its own devising. On top of that, the bills propose that the owners of intellectual property could have payment companies like Visa or MasterCard refuse to do business with companies accused of internet piracy. The problem with all of this is that it means that the internet will become just a bit less like the Wild West and just a bit more like an actual place of business. Censorship is also part of the discussion: the single issue that appears to have aroused most opponents of both bills to action is the fear of censoring the ‘net in a way that could throttle its creativity by making it more like an old-style bookstore, albeit one that doesn’t exist in real space, and less like a freewheeling circus of creativity, artistry, and industry.

I myself see the issue from both sides. Clearly, I am not in favor of people stealing other people’s work and selling it as though they owned it themselves. But I also believe that when the history—or at least the intellectual history—of human civilization is finally written one day by some thoughtful scholar from a different planet, the invention of the ‘net will take its place next to the invention of moveable type as one of the quantum leaps forward humanity will have made on its way to wherever it is humanity is going. Even though I wrote my dissertation by hand and then typed it on an IBM Selectric typewriter (which unimaginable task involved typing at least a hundred pages of Hebrew backwards, since the typing ball only moved from left to right and couldn’t be told otherwise), I cannot imagine what it would mean to be an author today without the internet to rely on for information of every imaginable variety. Keeping people from stealing from other people is an excellent idea, therefore. But I also see the point of not regulating the ‘net out of existence: part of its culture, as it has evolved to date at any rate, has to do with the freedom to access unparalleled amounts of information precisely because someone else has already posted that information…somewhere. So the challenge is to cure the disease without killing the patient. In this context, what that means is that Congress is entirely right to want to stop piracy. Who could possibly be opposed to that? But we still have to move ahead carefully and judiciously so that the culture of the ‘net itself is enhanced, not degraded, by the government’s efforts to prevent theft.

The traditional Jewish attitude towards intellectual property is that it can neither be owned nor stolen. Ein baalut l’chokhmah is the quasi-traditional formulation: “Knowledge cannot be owned.” What that meant traditionally was that ideas couldn’t be considered personal property. As a result, stealing a book is a crime in a way that stealing an idea found in that book (i.e., passing it off as your own) simply is not. Furthermore, in classical Jewish law, although all theft is considered wrong, only consequential theft is considered an actionable offense: stealing someone’s used coffee grounds is perhaps morally wrong (in that all theft is wrong), but because the purloined substance is deemed to have no commercial value it is not considered an actual crime for which someone could be tried and convicted. That was then, however, and this is now: one of the challenges Jewish jurists are going to have to face, and have already begun to face, as we move further into the information age is how to revise laws that were developed in a setting in which the notion of knowledge without a physical medium was limited to the thoughts in one’s head. How the ancient rabbis would have responded to a world in which it is possible to encode thousands of books on a single disk, or in which it is possible for the iTunes store to have sold ten billion songs without any medium at all—not records, not CDs, not tapes, just music as it exists in binary code with no discernible physical reality—is impossible to imagine.

Or rather, not so impossible: they would have supposed whoever was telling them about iTunes was kidding. Then, when they realized it was true, they would have started looking around for the mashiach. Even to me, much of what we routinely do today seems almost unbelievable. (I subscribe, for example, to an on-line Jewish library that makes available to me, with no physical medium at all and for all of $50 a year, a full rabbinic library that would take tens of thousands of dollars to acquire in print, supposing you could even find all the volumes.) So the googloids are right: we have something indescribably precious in our hands and we mustn’t strangle it. But surely that doesn’t mean letting thieves steal as they will either. My sense is that the Wild West wasn’t anywhere near as much fun as it seems to have been in the movies. Especially if you were the one getting shaken down by the bad guys before the sheriff and his posse managed to arrive….

Thursday, January 12, 2012

Ministerial Exceptions

Like many of you, I’m sure, I’ve been following the latest round of Supreme Court deliberations regarding the so-called ministerial exception with great—and far greater than merely professional—interest. And, also I’m sure like many of you, I find myself of two minds with respect to the issue itself, and in particular with respect to the decision handed down yesterday in the case known by the unwieldy name of Hosanna-Tabor Church v. Equal Employment Opportunity Commission, a decision that has been hailed by some as the most important the court has handed down with respect to the independence of religious institutions in several decades. Whether that is true or not, I don’t feel qualified to say. But I find myself very engaged by the issue itself. And I also find myself on both sides of the fence, equally able to argue for and against the reasonability of the court’s uncharacteristically unanimous decision.

The concept of the ministerial exception is simple enough to understand and refers to the de facto right of a religious organization like a church or synagogue to choose its spiritual leadership without regard to the laws that forbid different kinds of discrimination in the workplace. Thus, for example, although gender-based discrimination is forbidden by law in most contexts, the government does not interfere in the hiring practices of churches that only permit men to serve as clergy. Clearly, though, there have to be limits to such a hands-off policy, and I believe that all, or surely most, Americans would agree that the litmus test for the reasonability of the government refusing to become involved in the inner workings of a religious organization with respect to hiring policy should be whether the policy in question is rooted in actual church doctrine or is “just” discriminatory in nature. It is also important to remember in this regard that the exception, despite its name, does not apply only to the hiring of clergy, but to the way religious organizations conduct themselves in general. Here too, though, the litmus test has to be whether the discriminatory act in question is rooted in actual dogma or not. To give a simple example, the government does not and, I believe, should not object when an Orthodox synagogue insists that women sit in a segregated, cordoned-off area of the sanctuary even though gender-based discrimination in public settings is generally illegal. But if that same synagogue were to establish a similar restricted seating area for people of a specific race or for physically handicapped people, then I believe, as I’m sure most of my readers would agree, that the government should absolutely step in to prevent overtly discriminatory practices that have no specific basis in religious dogma and are, therefore, correctly labeled as “just” discriminatory.

And so we come to this week’s Supreme Court decision. The case involved a woman named Cheryl Perich, who worked as a teacher in a Lutheran school in Redford, Michigan. Ms. Perich suffers from narcolepsy, a chronic sleep disorder, and was in the midst of pursuing an employment discrimination claim when she was fired from her job not for being a narcoleptic or for being an incompetent teacher, but for pursuing her dispute with the Missouri Synod (the second-largest Lutheran denomination in the United States) through legal channels rather than within the church. In other words, the church fired her not for any reason connected with her suitability as a teacher or her ability to teach, but because she violated church doctrine by seeking redress in the courts rather than in the church for what she perceived as wrongs committed against her. And this is where I find myself veering off in two different directions, uncertain which path is the one along which I prefer to travel. In the real forest, obviously, when you get to a fork in the road you can follow only one of the two paths that lie open before you. That should probably be how things are in the world of ideas as well…and yet I find myself occasionally more than able to go off—and, at that, with a sense of certainty that I’ve chosen the right path—in two different directions at once. This is one of those moments.

Surely, American citizens have the right to expect the justice system to protect their rights. Isn’t that the whole point of there being a civil justice system, so that citizens can have recourse to it when they feel that they are being treated unfairly or unjustly, or that they are being discriminated against unreasonably? I think we’d all agree easily that that is precisely why the system exists. Yet the justices of the Supreme Court, all nine of them speaking in one voice, determined that Ms. Perich has no right to contest her dismissal, that the Lutheran Church has the absolute right to fire whom it wishes for non-compliance with church doctrine even when the doctrine in question is more in the category of church policy and does not appear to have any overt relation to spiritual dogma or to religious belief. Making the matter more curious is the fact that Ms. Perich is neither an ordained clergyperson nor a full-time teacher of religion. In fact, she taught religion for only forty-five minutes out of an entire day’s worth of teaching assignments. Yet that alone—that single class she taught in religion—was enough for the court to determine that her employer could bring the ministerial exception to bear and fire her without having to answer for actions that, were they undertaken by any other kind of employer, would indisputably be taken as an open infringement of an employee’s basic civil rights.

Confusing the issue is the fact that the Court’s decision seems to imply that you don’t have to be a minister to be a minister. Justice Clarence Thomas wrote, for example, that the question “whether an employee is a minister is itself religious in nature, and the answer will vary widely.” The fact that Cheryl Perich is specifically not a minister, and is not considered by anyone at all (including the Lutheran Church) to be a minister, is not enough to keep the church’s right to dismiss her from being protected by the ministerial exception: the church apparently has the right, according to Justice Thomas, to determine on its own that somebody is a kind of minister even though that person lacks the title, ordination, training, education, or calling to serve in that capacity. Chief Justice Roberts clearly spoke for the whole court when he summarized the decision as follows: “The interest of society in the enforcement of employment discrimination statutes is undoubtedly important. But so, too, is the interest of religious groups in choosing who will preach their beliefs, teach their faith and carry out their mission.” In other words, as soon as an employee is recognized as being involved in teaching or preaching religion, the employer—supposing the employer is a religious institution and not, say, a university with a Religious Studies department—is no longer bound by the standard laws that forbid discrimination in the workplace.

And that is where I part company with the court. On the one hand, the nature of religion requires that some of our normal disinclination to permit discriminatory practices be set aside: there is nothing wrong at all with permitting synagogues to decline to accept non-Jews as members or with allowing Lutheran churches to offer membership only to Lutherans. That is discrimination, of course, but it is to my mind rational discrimination—and nothing at all like a country club or a tennis club refusing to admit as members people of the “wrong” religion or race. And I certainly agree that the ministerial exception should apply to the hiring of ministers: for a Conservative synagogue to say that it is looking for a rabbi, but is not interested in meeting candidates other than ones ordained by Conservative institutions, seems rational to me as well. But I lose my momentum as I try to travel as far down this road as the Supreme Court went the other day: once you say that a teacher can be fired for going to court to seek redress for a wrong she believed to have been committed against her, merely because she teaches one class a day on religion and the church teaches (more than just a bit self-servingly) that disputes between employees and the church, including ones that have nothing at all to do with religion, must be resolved in-house rather than in the courts—that already seems to me to be a clear infringement on that teacher’s basic civil rights.

The responses to the decision were as you probably can imagine. All sorts of religious groups, including the Orthodox Union, which joined together with several Christian groups including the United States Conference of Catholic Bishops and the Mormon Church, were delighted. I am less delighted. On the one hand, freedom of religion is one of the foundation stones upon which American democracy rests. The First Amendment to the Constitution rightly prohibits the government from becoming involved in the inner workings of religious institutions or from establishing a state religion. The ministerial exception that permits synagogues and churches, and all houses of worship, to hire the clergy they wish without respect to anti-discrimination legislation, however, is only reasonable as long as it is applied judiciously and in a way that seeks a rational balance between the right of religious institutions to self-govern and the right of individual citizens not to have to endure discriminatory hiring practices. I’m a rabbi, not a lawyer or a professor of law. But why Cheryl Perich should be barred from seeking the assistance of the courts in a matter wholly unrelated to religious dogma, I can’t quite understand. The rabbi in me loves the concept of synagogues being free to function as they see fit without reference to those irritating rules society imposes on other institutions and businesses. But the citizen in me can’t quite understand how it can be good for society for institutions that self-define as godly and profess to promote the finest civil and moral virtues to be exempt from the very rules society has put in place to prevent the powerful from oppressing the weak, and thus to ensure that society functions fairly and justly for all.

Thursday, January 5, 2012

The Tipping Point


The notion that there is something sacred, not merely practical, about the concept of k’lal yisrael—the indivisible unity of the Jewish people—is a very old idea, one that goes back at least as far as the poet whose words became the central part of the version of the Amidah we recite on Shabbat afternoons, the part that suggests that the unity of the people below is intended specifically to reflect the indivisible, undifferentiated one-ness of God on high. The idea is thus that the Jewish people, at least in theory, proclaims the unity of God not merely by declaiming the Shema twice daily but by modeling it in the way they conduct their affairs—the way we conduct our affairs—so as to make of many Jewish people one nation, one people, one extended family.

That’s the theory, at any rate. (The expression k’lal yisrael, literally “the collective entity of Israel,” appears once in the Talmud, but without any clear theological overtone, simply as an expression denoting the Jewish nation in general, as opposed to any subgroup within the people. The more usual rabbinic term for what moderns seem to want to call k’lal yisrael is k’nesset yisrael, literally “the congregation of Israel,” what Solomon Schechter slightly infelicitously called “catholic Israel.” Why modern usage has chosen to favor the former term over the latter, I’m not sure. Nor is it correct to say that the expression k’nesset yisrael has fallen entirely into desuetude: I noticed it just the other day in an essay by Rabbi Jonathan Sacks, chief Orthodox rabbi of Great Britain, where he wrote: “The subject of covenantal promises is not the sub-community of pious Jews but k’nesset yisrael, the collective entity of the people of Israel….”) But whatever we call it, the concept is the same: part of being a faithful member of the House of Israel means accepting as brethren all our co-religionists, both those with whom we disagree about details and those whose entire approach to religion we find intellectually indefensible and morally off-putting.

It’s an easy sermon to preach. It’s distinctly harder and more challenging actually to take the concept to heart and truly to believe in it. Indeed, the events of the last few weeks in Beit Shemesh and elsewhere in Israel have pushed me almost to the point at which I wonder whether any normal person can truly embrace the notion of k’lal yisrael without descending so far into the realm of the absurd as to make the entire undertaking meaningless. Have you been following the story? It’s beyond upsetting. But it’s also important to take it seriously and thoughtfully. As someone who has recited the words cited above as part of his afternoon prayers every Shabbat for almost four decades, I have a lot invested in the concept. And so, I think, should we all.

The story begins with a little girl, one Naama Margolese, age eight. Little Naama, an olah from the U.S., has the misfortune to live in Beit Shemesh, a bastion of haredi Jews, where she has been the subject of abuse, including being spat at, cursed at, and having the word “whore” screamed at her…for walking to school dressed like a normal Israeli child, i.e., absent the trappings of ultra-Orthodoxy her neighbors favor for themselves and their daughters. Her story caught the attention of the Israeli public and became a kind of flashpoint for the anger average Israelis feel towards the kind of deeply misogynistic, anti-everybody-but-us policies that have become regular fare in many cities in Israel. When police in Beit Shemesh, for example, attempted to remove posters the haredim had put up demanding that women not walk on the same sidewalks as men, they were taunted and threatened by people who openly and shamelessly called them Nazis, a term that has become increasingly devalued, it seems, with each successive week of catcalls and insults hurled at Israeli policemen attempting merely to do their jobs.

Thousands of Israelis attended a demonstration in Beit Shemesh, a suburban town to the west of Jerusalem, in an attempt publicly to oppose the kind of extremism we in the Diaspora tend to view as peculiar but essentially benign—why else would so many of us decorate our homes with pictures of happy haredim dancing around in merry circles?—but which Israelis are beginning to understand as a true threat not only to the reasonableness of maintaining faith in the concept of k’lal yisrael, but to the social fabric of the Israeli state itself.

One thing appears to have led to another. Increasingly, Israeli women have refused to move to the back of the bus when haredi men have ordered them to do so. (That kind of gender-based segregation in public transportation is formally illegal, but has apparently been the de facto rule for years on certain bus lines that pass through haredi neighborhoods. It is this de facto reality that people are no longer prepared meekly to accept.) One Tanya Rosenblit, now being hailed as the Rosa Parks of Israel, began a kind of chain reaction a few weeks ago simply by refusing to give up her seat on the 451 bus traveling from Ashdod to Jerusalem merely because her presence in the front of the bus was upsetting to some of the male passengers. Then, on December 28, a haredi man on a bus in Jerusalem insisted that a female IDF soldier move to the back of the bus. When she refused to budge, she was subjected to verbal abuse so vile and extreme that the police arrested the man who was harassing her and charged him not only with misconduct on a public conveyance, but actually with sexual harassment. The tensions between the haredim—the extreme Orthodox, who make up 10% of the Israeli population and 14% of the Israeli Jewish population—and the rest of Israeli society merely escalated from there, with violence and hysteria characterizing both sides: there have been several incidents reported in which haredi children were intimidated, insulted, and assaulted by people opposed to their parents’ extremism.

A new low, however, was reached on New Year’s Eve, when a huge demonstration of haredim in Jerusalem’s Kikar Hashabbat featured large numbers of demonstrators dressed up like concentration camp prisoners, complete with striped uniforms and yellow “Jude” badges, the point—made not subtly or covertly, but completely openly and defiantly—being that what the Nazis did to the Jews of Europe, the State of Israel is doing to the haredim. The tsunami of criticism that followed—including from leaders of marginally less extreme elements of the haredi world, like the Shas party, as well as from almost every political party in Israel, plus the leadership of Yad Vashem, the Simon Wiesenthal Center, and other organizations devoted to preserving the memory of the martyrs of the Shoah—does not seem to have impressed the demonstrators, who seem delighted to have caught the public’s attention with something as simple to assemble as prisoners’ garb. Indeed, an official of the group that organized the protest was cited in the Jerusalem Post as saying that he had “no regret at all” about the use of Shoah imagery. As, I’m sure, does he not!

To us on the outside—and to me personally—this series of events is emblematic of how far we have strayed from the Zionist ideal of a free Jewish people working shoulder-to-shoulder in the Land of Israel to create a state that reflects the finest Jewish values. The haredi community in Israel is part of k’lal yisrael. They are pre-modern in countless ways. They seem proud of the degree to which they have rejected values that most of us find not only basic to life in a democracy, but basic to our sense of ethics and decency. I’m reminded of a rabbi, one of the few haredi types with whom I’ve had a personal relationship, who once bragged to me that he hadn’t ever read a book about Judaism, Jewish culture, or Jewish history that could possibly have challenged his faith or encouraged him to think about things even slightly differently than he had previously. That was years ago, but I’m sure he still hasn’t. Neither, I suspect, have the people in Kikar Hashabbat who sewed yellow stars onto their kapotehs. I would like to think that theirs was simply an act of ignorance undertaken by people whose knowledge of even relatively recent history is so rudimentary and so unsophisticated that they simply cannot see how grotesque it is even to suggest obliquely, let alone to say out loud and explicitly, that the way haredim are treated in modern Israel is related even tangentially to the way Jews were treated in Nazi Europe. I’d like to think that, but when I look into my heart I know that that is not what I really think.

What I really think is that the culture wars are only beginning, that the ultimate struggle to rescue Israel from the haredim is going to be as long, as protracted, and as painful as the struggle to win political recognition from the neighbors. I believe with all my heart in the concept of k’lal yisrael. But I also know that there is something counterintuitive about including in that concept people who themselves appear totally to have rejected it, who have made religious and political extremism into a virtue. There is hope, however. There are many in Israel, including many who are not formally members of Masorti congregations, who believe that it is entirely possible to be faithful to the commandments, to believe in the k’lal yisrael concept, and to consider moral development something to embrace as a great good rather than to reject as inherently evil. There are many Israelis who understand that the extremism of the haredi community—extremism that does not balk even at insulting the memory of the k’doshim for the sake of making political hay—is not merely too much of a good thing, but something inimical to the future wellbeing of the state. Clearly, we have reached a tipping point. Where Israel goes from here will be the 2012 news story that counts in the long run.