One of the few things that Andrew Yang and I have in common
is that we both have about the same chance of becoming the next President of
the United States. Despite polling far behind the frontrunners, however, Yang strikes
me as in many ways the most original of all the would-be candidates vying for
the Democratic nomination and also, and by far, the most tech-savvy. And I owe
to him my renewed interest specifically in robotics and in the potential impact
machines provided with artificial intelligence might one day—and, according to
Yang, one day very soon—have on our American landscape.
When most people—or at least most people my age—think about
robots, they generally think of unreal ones: Rosie the Robot Maid from The Jetsons, C-3PO and R2-D2 from Star Wars, RoboCop and WALL-E from the movies named for them, Optimus Prime and Megatron from the I’ve-lost-track-of-how-many Transformers movies, and lots of other random androids and tin-plated automatons
dished up by Hollywood to the American public for their cinematic degustation. Mostly,
though, these robots are just souped-up metal versions of regular people who, just like their flesh-and-blood prototypes, vary dramatically in
terms of the strength of their moral fiber: some are good and some are evil; some
are adorable, while others are malevolent and seriously creepy; some can only
manage to do what human beings have pre-programmed them to be able to
accomplish, while others are able to strike out on their own and become
autonomous, or at least autonomous-ish, actors on the world stage. But the key
criterion the robots mentioned above all share is their non-existence: all are
made-up creations intended specifically to entertain as characters in movies or
on television shows and none of them is real.
For most people, then, robotics is merely the branch of
theoretical science that provides the ideational underpinning that makes R2-D2
real enough to be depicted in a movie that bills itself as futuristic, but not
completely fantastic. And that was what I thought as well.
Enter Andrew Yang, who opened my eyes to details of which I
had no idea at all.
Yang talks about the entry of robotics into the economic
mainstream, not as a semi-plausible plot for some futuristic science fiction
movie, but as a “fourth industrial revolution” already well underway. (The
first, stretching out from the end of the eighteenth century through the
beginning of the nineteenth, was about mechanization. The second, coming at the
end of the nineteenth century, had to do with the introduction of electrical
power. The third, during the second half of the twentieth century, had to do
with the advent of computer technology. And, at least according to Andrew
Yang’s understanding, the advent of robotics will bring in its automated wake change just as total and societally transformational as the introduction of computers or the invention of the mechanical engine were in their day.) Nor can
the numbers he cites be easily dismissed: the nation appears in the last decade alone to have lost almost five million jobs to robotic automation. And the advent of self-driving trucks—in effect, car-robots—will, according to Yang, cost the nation another 8.5 million jobs if the number of soon-to-be-unemployed truck drivers is added to the number of soon-to-be-unemployed workers in the various service industries that cater to truckers while they are on the road away from home. And Yang predicts that the loss of 13.5 million jobs is only the beginning because, in
the end, the advent of robotics will totally, permanently, and irreversibly change
the American workplace. We either will or will not be ready. But what we will not be able to do is stave it all off with wishful thinking any more
than people a quarter-century ago could have possibly halted the adoption of
computer technology in American offices no matter how sincere their desire might
well have been to protect workers with no computer skills from losing their
jobs.
And so, with my interest already more than merely piqued, I
found myself drawn powerfully to an extremely interesting responsum about Artificial-Intelligence-related
issues adopted by the Committee on Jewish Law and Standards last June. (The
CJLS is the highest legal authority within the Conservative Movement and the
ultimate arbiter of halakhic legality and illicitness.) Written by Rabbi Daniel
Nevins, currently the dean of the Rabbinical School at the Jewish Theological
Seminary, the paper has the tantalizing title “Halakhic Responses to Artificial
Intelligence and Autonomous Machines” and is an excellent example of just the
kind of incisive, well-researched writing that characterizes the CJLS at its
best. At almost fifty dense pages, it’s a big read. And a lot of it is couched
in technical language that will be of interest mainly to rabbis and scholars.
But the larger picture is one of a thoughtful legist trying to respond to
something entirely new in the world by drawing from the wellsprings of history
and attempting to find contemporary relevance in lessons developed long ago by
people who wouldn’t have been able even to dream of C-3PO or Rosie the Robot,
let alone to imagine them actually
existing.
And yet the halakhah—the general term for Jewish
law in all its complexity, inventiveness, and perplexitude—has been mined in
the past to find responses to all sorts of new things, including steam engines,
hearing aids, computers, and space travel. So why not robotics?
The questions Rabbi Nevins sets out for himself to answer
boil down to three basic queries.
One has to do with the question of agency: can an
intelligent machine able to make autonomous decisions be considered the author
of its own deeds or must the responsibility for whatever R2-D2 does be laid at
the feet of his original programmer?
A second has to do with ethics: should autonomous, thinking
machines, including those programmed with the finest ethical principles, be
permitted to make life-and-death decisions regarding human beings or should the
ultimate responsibility for acting morally never be permitted to rest with machines—including those whose
ability to weigh data and simultaneously to compare tens of thousands of
precedents far outpaces the analogous ability even the brightest and most
learned human beings could possibly cultivate?
And the third has to do with religion in general and with
Judaism in particular, and asks whether a robot—or any autonomous, intelligent
machine—can perform a mitzvah or utter a prayer either on somebody
else’s behalf or, even more weirdly to consider, on its own behalf.
So those are Rabbi Nevins’s three core issues. Each in its
own way is a refocus of the single basic question that underlies them all,
however: can a machine capable of acting autonomously be taken seriously (or
ethically or legally) as a person? To push that envelope just slightly further,
I could ask if such a machine—or rather, once personhood is in some way deemed
to inhere in the warp and woof of its existence, if such a “person”—could be
deemed a Jew. Or, for that matter, if such a “person” could be supposed to
possess any of the factors that we use to distinguish between different
varieties of flesh-and-blood people like gender, nationality, ethnicity, race,
sexual orientation, etc. Can a robot be a black person or a gay person? Can a
robot be a man or a woman? This suddenly feels a lot more complicated than it
seemed on The Jetsons!
Rabbi Nevins deals with all these issues intelligently and
adroitly. And then, towards the end of
his paper, he finally gets to the section that strikes me as being the crux of
the matter, the one entitled “Androids as Religious Agents.”
He begins by citing books by Gershom Scholem, Moshe Idel, and Byron
Sherwin about the concept of the golem, the man-made creature that
entered halakhic discourse in the seventeenth century. And then he turns to the
sources themselves.
The Sefer
Yetzirah, generally
considered the oldest extant book of Jewish mystical speculation, apparently already—and this is a very old book we’re
talking about, one that some date as early as the second century CE—imagined
the possibility that the scriptural reference to the souls that Abraham “made”
in Haran was meant to be taken literally and that Abraham actually knew how to
create what we would call an android—a kind of artificial human being lacking
only speech and the kind of innate intelligence that can only come as a gift
from God. And, indeed, that idea that in the righteous individual could
conceivably inhere the ability artificially to create a living creature who
would then lack only speech is already present in the Talmud, where we read that
Rava, one of the masters of rabbinic Judaism in late antiquity, actually did
create a man, albeit one who could not speak. And his remark that, if they were
to wish it, “the righteous could create a whole world” of living creatures is
also recorded, and in that same talmudic passage.
These passages were eventually taken seriously. The eminent
halakhist, Rabbi Tzvi Ashkenazi (called the Ḥakham Tzvi, 1660-1718), for example, actually
penned a scholarly responsum dealing with the question of whether the kind of
person created artificially could be counted in a minyan, in a prayer quorum. (His
answer was no.) His son, the even more famous Rabbi Jacob Emden (1697-1776), also
took up the matter and determined that the speechless android is less like a
mute human being than like an animal in human form—and so his answer was also
no. Scholem and Idel discuss these sources and many others, but it was the late
Rabbi Sherwin who apparently first realized that these texts could reasonably
form the basis for a halakhic approach to technology in our day. Indeed, his
2004 book, Golems Among Us: How a Jewish
Legend Can Help Us Navigate the Biotech Century, is still in print and is
widely available. I recommend it highly to my readers.
Nevins spends time with all sorts of authors I haven’t read,
people like Giulio Tononi and Michael Graziano who write about the complex
interrelationship of consciousness, technology, and humanness—and thus about
the nature of personhood itself, about what it means to be a person. He
understands clearly that thinking about thinking machines is a way of thinking about
what it means to be alive, what it means to be a human being, even what it
means to exist at all. To imagine a world populated both by regular human
beings and by the kind of androids depicted in the recent HBO hit series
Westworld is simple enough. But to follow that thought through and attempt to
imagine how civil rights and ethical prerogatives might inhere differently in born-people
and made-people is, to say the least, daunting.
Andrew Yang is personally responsible for bringing this
issue to the national stage and we should thank him for that. Daniel Nevins has
effectively shown that there is more than enough water in ancient wellsprings from
which scholars can and should drink as they ponder these abstruse, confusing
issues, so he too deserves our thanks. But where exactly this will all take
us—that, at least as far as I can see—is still entirely up in the air.