Rabbis Without Borders
Rabbis Without Borders is a dynamic forum for exploring contemporary issues in the Jewish world and beyond. Written by rabbis of different denominations, viewpoints, and parts of the country, Rabbis Without Borders is a project of Clal – The National Jewish Center for Learning and Leadership.
On Monday evening, over 100 people gathered at my congregation to hear Dr. Jeremy Wertheimer, who holds a Ph.D. in Artificial Intelligence and is a VP at Google, reflect on the ways that technological innovation may be impacting the human condition. The program was our keynote event, made possible by a grant from Scientists in Synagogues, a project of Sinai and Synapses. We are one of 11 congregations in North America that were chosen for this first cohort of communities exploring the interface of science and religion and the ways that they can be in conversation with each other.
It was a fascinating evening, full of thought-provoking questions that engaged everyone. We explored three primary topics.
The first was whether the ability of AI to present choices that it thinks we will like or want changes our sense of free will. It is certainly true that AI can help us narrow down the vast number of options for how we will fly from point A to point B, and can help Netflix suggest things we might like to watch based on our previous choices and the tastes we have shared with the service. Facebook places certain ads, certain news articles, and certain friend updates on our feed from a potential list of thousands, based on what it knows about our friendship connections, interests, and tastes. In our conversation, it was suggested that the basic job being done is not so different from the ways that other influencers informed us, marketed and advertised to us, and sought to shape our tastes in previous eras. When we think of a problematic example, such as the way that fake news stories used those same algorithms on Facebook to insert themselves in front of people who were likely to be swayed by them, the problem, Dr. Wertheimer suggested, is a technological one that can be fixed. And, in fact, Facebook has already taken some steps, and is exploring how to do more, to recalibrate and tag these kinds of news stories so that they are less likely to appear on our walls.
The second question we explored was one of ethics: broadly speaking, can a machine be 'programmed' to be ethical? In discussing this topic we learned about the ways in which AI actually makes 'decisions.' When we look at something like self-driving cars, we are seldom talking about a line of code that tells the car what to do in a particular circumstance. Rather, AI can examine scenarios and outcomes drawn from enormous data sets gleaned from the collected experience of every self-driving car on the road. From this, it learns to make the choice most likely to lead to the best outcome. And we, as human beings using this technology, are not free of responsibility. Dr. Wertheimer presented the example from Jewish tradition of an ox that gores, causing harm to another ox or to a human being. We find this first presented in Exodus 21:28-29, but there is significant discussion of a variety of situations involving the ox in the Talmud (Bava Kamma). As in the Talmud, we are responsible for taking action to minimize the harm that an ox can do. If, knowing that it is capable of causing harm, we fail to take those precautions, we are held liable. That is true today when we take a car on the road (where thousands die in automobile accidents every day), and it will remain true even when a self-driving car may help to minimize the harm caused when a driver is distracted, has been drinking, etc.
The third main question we explored concerned our sense of self. When some people consciously 'curate' their online content about themselves, while others seem to share everything without considering the potential consequences (to themselves or to others), has the technology impacted our sense of self? Here, Dr. Wertheimer suggested the use of common sense. We looked at how a faith practice can help to anchor and guide us as we navigate these technological waters: one that teaches about the power of the word to create or destroy, that encourages modesty, and that regards spreading gossip that damages the life and reputation of another as akin to committing murder.
While I have only scratched the surface with a few examples from our fascinating conversation, one of my main takeaways was that, while the context may have changed, the essential questions we were asking are ancient ones. Human beings have long wondered about the extent to which we truly have free will, or whether the path we travel is pre-ordained. These questions have been addressed by philosophers and theologians of every faith tradition. Asking who I really am, and how I bring my whole presence to my interactions with others, is a spiritual question addressed in prayer, in meditation, and in the writings of Brené Brown. Online is simply a newer space that we occupy, where questions of self-portrayal manifest in new ways.
What we observed was the considerable wisdom and philosophical inquiry in Jewish tradition, where our ancestors, dealing with the technologies of their time, were already asking these questions about the human condition. When we hold up these examples, and consider how they might help us navigate new, rapidly unfolding territory, we are not only grounding ourselves in our rich ethical tradition. We are also joining a conversation that may have started in the pages of the Talmud, but has continued among Jews for over two millennia.