Pepper, a humanoid robot designed to be “kindly, endearing, and surprising.” Photo courtesy of OECD 2016 Forum/Flickr.

Why we should fear emotionally manipulative robots

Artificial intelligence is learning how to exploit human psychology for profit

“Keep going straight here!”

“Err, that’s not what the app is telling me to do.”

“Yes, but it’s faster this way. The app is taking you to the beltway. Traffic is terrible there!”

“OK. I don’t know these roads.”

So went a conversation with an Uber driver in northern Virginia recently. But imagine it was a self-driving Uber. Would you even have that conversation, or would you be doomed to a frustrating 25 minutes on the beltway when you could have been home in 15?

And as your frustration mounts, will the AI driving the car recognize this—or appear to—and respond accordingly? Will customers prefer cars that seem to empathize?

Or imagine instead that you and your partner are arguing in the back seat over which route to take. How will you feel when your partner seems to be siding with the machine? Or the machine is siding with your partner?

Empathy is widely praised as a good thing. But it also has its dark sides: empathy can be manipulated and it leads people to unthinkingly take sides in conflicts. Add robots to this mix, and the potential for things to go wrong multiplies. Give robots the capacity to appear empathetic, and the potential for trouble is even greater.

To know why this is a problem, it helps to understand how empathy works in our daily lives. Many of our interactions involve seeking empathy from others. People aim to elicit empathy because it’s taken as a proxy for rational support. For example, the guy in front of you at an auto repair shop tells the agent that he wants his money back: “The repair you did last month didn’t work out.” The agent replies: “I’m sorry, but this brake issue is an unrelated and new repair.” The argument continues, and the customer is getting angry. It seems like he might even punch the agent.

But instead, at this point, the customer and the agent might both look to you. Humans constantly recruit bystanders. Taking sides helps to settle things before they escalate. If it’s two against one, the one usually backs down. A lot of conflicts thereby get resolved without violence. (Compare chimpanzees, where fights often lead to serious injury.) Our tendency to make quick judgments and to take sides in conflicts among strangers is one of the key features of our species.

When we take sides, we assume the perspective of our chosen side—and from here it is a short step to develop emotional empathy. According to the three-person model of empathy introduced by Breithaupt, this is not entirely positive, because the dynamic of side-taking makes the first side we take stick, and we therefore assume that our side is right, and the other side is wrong. In this way, empathy accelerates divisions. Further, we typically view this empathy as an act of approval that extends to our consequent actions, including, for example, lashing back at the other side.

Now let’s imagine that the agent at the repair shop is a robot. The robot may appeal to you, a supposedly neutral third party, to help it to persuade the frustrated customer to accept the charge. It might say: “Please trust me, sir. I am a robot and programmed not to lie.”

Sounds harmless enough, doesn't it? But suppose the robot has been programmed to learn about human interactions. It will pick up on social strategies that work for its purposes. It may become very good at bystander recruitment. It knows how to get you to agree with its perspective and against the other customer's. The robot could even provide perfect cover for an unscrupulous garage owner who stands to make some extra money with unnecessary repairs.

You might be skeptical that humans would empathize with a robot. Social robotics has already begun to explore this question. And experiments suggest that children will side with robots against people when they perceive that the robots are being mistreated. In one study, a team of American and Japanese researchers carried out an experiment in which children played several rounds of a game with a robot. Later the game was interrupted by an overzealous confederate of the experimenters, who ordered the robot into a closet before the game was over. The robot complained and pleaded not to be sent into the closet before the game could be completed. The children indicated that they identified socially with the robot and against the experimenter.

We also know that when bystanders watch a robot and a person arguing, they may take the side of the robot and may start to develop something like empathy for the machine. We already have some anecdotal evidence for this effect from traffic-directing robots in Kinshasa. According to photojournalist Brian Sokol in The Guardian newspaper, “People on the streets apparently respect the robots … they don’t follow directions from human traffic cops.” Similarly, a study conducted at Harvard demonstrated that students were willing to help a robot enter secured residential areas simply because it asked to be let in, raising questions about the potential dangers posed by the human tendency to respect a request from a machine that needs help.

It is a relatively short step from robots that passively engage human empathy to robots that actively recruit bystanders. Robots will provoke empathy in situations of conflict. They will draw humans to their side and will learn to pick up on the signals that work. Bystander support will then mean that robots can accomplish what they are programmed to accomplish—whether that is calming down customers, or redirecting attention, or marketing products, or isolating competitors. Or selling propaganda and manipulating opinions.

It would be naive to think that AI corporations will not make us guinea pigs in their experiments with developing human empathy for robots. (Humans are already guinea pigs in experiments being run by the manufacturers of self-driving cars.) The robots will not shed tears, but they may use various strategies to make the other (human) side appear overtly emotional and irrational, including deliberately infuriating the other side. Humans will become unwitting participants in an apparatus increasingly controlled by AI with the capacity to manipulate empathy. And suddenly, we will have empathy with robots, and find ourselves taking their sides against fellow human beings.

When people imagine empathy by machines, they often think about selfless robot-nurses and robot suicide helplines, or perhaps also robot sex. In all of these, machines seem to be in the service of the human. However, the hidden aspects of robot empathy are the commercial interests that will drive its development. Whose interests will dominate when learning machines can outwit not only their customers but also their owners?

Researchers now speculate about whether machines will learn genuine empathy. But that question is a distraction. Machines will not "feel" what humans feel, at least for a while, even if they get good at naming human emotions and responding to them. In the near future, it doesn't matter which emotions machines have. What matters is which emotions they can produce in humans, and how well they learn to master and manipulate those human responses. Instead of AI with empathy, we should be more concerned about humans having misplaced empathy with AI.

Colin Allen is a philosopher and cognitive scientist who has been teaching at Indiana University since 2004, but is moving to the University of Pittsburgh in fall of 2017. His research spans animal cognition, artificial intelligence, and foundational issues in cognitive science, and he is coauthor of the book Moral Machines: Teaching Robots Right from Wrong. Follow him on Twitter: @wylieprof.

Fritz Breithaupt is a humanities scholar and cognitive scientist at Indiana University. His current research focuses on empathy, narratives, and 18th-century literature. His lab works with serial reproduction of narratives, that is, telephone games. His newest book, The Dark Sides of Empathy, is forthcoming in spring 2018. Follow him on Twitter: @FritzBreithaupt.

This essay is part of a Zócalo Inquiry, Is Empathy the 20th Century’s Most Powerful Invention?

Photo courtesy of Intuition Robotics

Elli•Q: The robot who’s bubbe’s best friend

A new robotic companion is on a quest to alleviate loneliness and social isolation for older adults living alone.

Elli•Q — named for Elli, the Norse goddess of old age — is the brainchild of Intuition Robotics, a Ramat Gan startup pioneering social companion technologies. The robot’s mission is to be an “active aging companion,” keeping older adults engaged by helping them access and connect to today’s technologies, including video chats, online games, social media and other ways to stay in touch.

“We set out to create this company to have a positive social impact,” Dor Skuler, CEO and founder of Intuition Robotics, said. “While we don’t expect a robot or technology to be people’s friends or solve the problem of loneliness, we do think that technology can overcome barriers and bring people together in a way that’s not happening today.”

Loneliness is a growing public health concern worldwide.

“Loneliness and social isolation are the result of longevity, and technology has made the problem worse by requiring older adults to learn new technical skills in order to accomplish the simplest of tasks,” explained Skuler, a serial entrepreneur who also founded startups Zing and CloudBand.

Using “natural communication” such as body language, speech interface, sounds, lights and images to express herself, Elli•Q is designed to be emotive, autonomous and easily understood. She uses machine learning to acquire knowledge of the preferences, behavior and personality of her owner, and proactively recommends activities based on that history and recommendations by family.
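The learning loop described here, remembering which suggestions the owner takes up and weighting future recommendations accordingly, can be sketched in a few lines. Everything below (the class name, the activities, the multiplicative update rule) is a hypothetical illustration; Intuition Robotics has not published Elli•Q's actual algorithm.

```python
import random

# Hypothetical sketch of a preference-learning suggestion loop.
# Activities the owner accepts are reinforced; declined ones are dampened.
# This does not reflect Elli•Q's real (unpublished) implementation.

class CompanionRecommender:
    def __init__(self, activities):
        # Start with equal weight for every activity.
        self.weights = {a: 1.0 for a in activities}

    def suggest(self):
        # Sample an activity in proportion to its learned weight,
        # so favorites dominate without repeating deterministically.
        acts = list(self.weights)
        return random.choices(acts, weights=[self.weights[a] for a in acts])[0]

    def feedback(self, activity, accepted):
        # Multiplicative update: boost accepted suggestions, shrink declined ones.
        self.weights[activity] *= 1.25 if accepted else 0.8

rec = CompanionRecommender(["TED talk", "music", "go for a walk", "video chat"])
rec.feedback("go for a walk", accepted=True)
rec.feedback("music", accepted=False)
print(rec.suggest())  # "go for a walk" is now more likely, but not guaranteed
```

A real system would of course fold in time of day, family recommendations, and richer behavioral signals, but the core idea of tilting suggestions toward observed preferences is the same.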

“Our goal is to leverage a combination of our proprietary technology, gerontology know-how and elegant design to empower older adults to intuitively interact with technology and easily connect with content and loved ones, and pursue an active lifestyle,” Skuler said. “We like to think of Elli•Q as part communication coordinator, part arbiter of lifelong learning and part coach. She’s easy to talk to, simple to operate and understands her owner.”

A prototype of Elli•Q made its public debut at the Design Museum in London as part of its “New Old: Designing for Our Future Selves” exhibition running through Feb. 19. It looks at how new approaches to aging — from robotic clothing to driverless cars — can help people lead fuller, healthier, more rewarding lives into old age.

According to Age UK, nearly half of all people 75 and older live alone, and more than 1 million in the U.K. say they always or often feel lonely. Thirty-six percent speak to fewer than one person per day, and 11 percent say they spend five days or more a month without seeing anyone.

Moreover, the organization reports that older adults living in isolation increasingly rely on technology rather than face-to-face interaction, yet they often find the technology confounding. Of course, keeping older adults physically and mentally active is important for health and cognitive reasons.

As such, Intuition Robotics programmed Elli•Q to prompt users to engage. Elli•Q recommends TED talks, music or audiobooks; the robot suggests physical activities such as going for a walk. It’s also a personal assistant and reminds users about appointments or taking medications.

Intuition Robotics' multidisciplinary team of roboticists, industrial designers, full-stack developers, Android developers, gerontologists and machine-learning experts has meshed hardware and software, machine learning and computer vision, psychology and design. Elli•Q's design is a collaboration with famed industrial designer Yves Béhar and his studio, Fuseproject.

“The idea of having a robot companion is quite dystopian, especially for older generations. Through years of research, we were able to develop a design language and user experience that feels natural, with subtle expressions to develop a unique bond between Elli•Q and its owner,” Béhar said. “Elli•Q could never replace human interaction, but it can be an important motivating factor in keeping older adults healthy and active when living alone.”

At the moment, Elli•Q is only a prototype. Skuler says the robot will enter early trials in San Francisco and Israel in February.

Intuition Robotics has raised seed funding from investors, including Bloomberg Beta, Terra Venture Partners and OurCrowd.

“A lot of our success or failure with Elli•Q will be in our ability to create a full experience with older adults and help overcome the complexity of the digital world,” he said.

If Elli•Q succeeds in its mission, it could launch the next trend of smart social robots in homes the world over.

Should robots count in a minyan? Rabbi talks Turing test

Robots can hold a conversation, but should they count in a minyan?

A chatbot at Britain’s University of Reading was heralded this week as passing the Turing test, showing a conversational ability that managed to fool people into thinking it was human.

Using the fictional identity of a 13-year-old Ukrainian boy with the name Eugene Goostman, the robot convinced a third of a panel’s members that they were interacting with a fellow human being.

While some have expressed skepticism about the achievement’s significance, the advance of artificial intelligence raises profound questions.

“From the practical legal perspective, robots could and should be people,” Rabbi Mark Goldfeder wrote in an article published on CNN’s website in response to the robot’s feat. “As it turns out, they can already officially fool us into thinking that they are, which should only strengthen their case.”

Goldfeder, a fellow at Emory University’s Center for the Study of Law and Religion, is working on a book on robots in the law tentatively titled “Almost Human.” An Orthodox rabbi, Goldfeder spoke via online chat with JTA about whether robots could some day be welcomed as members of the Jewish community and what the Jewish tradition has to say about this issue.

JTA: What got you so interested in the topic of robots in Jewish law?

Goldfeder: It was a natural evolution from apes, actually. I started off looking at the line between humans and non-humans in Jewish law, and realized that the demarcation was not as clear cut in ancient times as it appears to be now.

Throughout the discussions in rabbinic literature we find creatures like Bigfoot, mermaids, centaurs, etc., and yes the golem, who in many ways resembles a robot.

Once you assume it may not be a strictly speciesist argument, the move from great apes to robots is quite understandable — given, of course, the caveat that robots may not be technically alive in the classical sense.

What are the basic criteria that would make a robot/monkey/mermaid Jewish?

Well, we start with the Talmud in Sanhedrin, which tells us the story of Rava sending a golem to Rabbi Zeira. Rabbi Zeira ends up figuring out that the golem was not human — it couldn’t communicate effectively and couldn’t pass the Turing test, apparently — and so he destroys it.

The halachic literature asks why this was not considered “ba’al tashchis,” wasteful, since maybe the golem could have counted in a minyan.

While they conclude that this golem at least was not able to be counted — they leave open the possibility of a better golem counting — it seems then that creation by a Jewish person would give the golem/robot presumptive Jewish status. For living things there is always parentage and conversion.

I should of course clarify that this entire discussion is "l'halacha v'lo l'maaseh," a theoretical laying out of views.

Good clarification, though being a robot seems like a convenient excuse to opt out of a bris.

In halachic terminology we would consider him "nolad mahul" (i.e., it is like he comes from the factory pre-circumcised).

Theoretically speaking, say a robot walked into your office and said, “Rabbi, I want to count in the minyan.” Would that be enough evidence for you to count him?

Not necessarily. For the purposes of this discussion, I would accept the position of the Jerusalem Talmud in the third chapter of Tractate Niddah: when you are dealing with a creature that does not conform to the simple definition of "humanness" — i.e., born from a human mother or at least possessing human DNA — but that appears to have human characteristics and is doing human things, one examines the context to determine if it is human. When something looks human and acts human, to the point that I think it might be human, then halachah might consider the threshold to have been crossed.

This makes sense from a Jewish ethical perspective as well. Oftentimes Jewish ethics are about the actor, not the one being acted upon. If I see something that for all intents and purposes looks human, I cannot start poking it to see if it bleeds. I have a responsibility to treat all that seem human as humans, and it is better to err on the side of caution from an ethical perspective.

In your opinion — more sociological than halachic — what’s your read on how seriously should Jewish institutions be preparing for the eventuality of artificially intelligent congregants or constituents?

I think the difference between science fiction and science is often time. If you were to ask me now, I don’t think Jewish institutions need to start worrying about it quite yet. Even with the Turing test officially passed, we are quite far from the situation of having a robot capable of walking among us unsuspected. But I do think that Jewish thinkers should start tossing around the questions because we’re probably 30, not 100, years away.

Hoop, there it is! Milken’s robotics team scores big

When “Sir Lancebot,” the motorized basketball-playing robot built by the Milken Community High School’s robotics team, made its debut appearance at a regional competition in San Diego in early March, the results were not encouraging.

The team, officially called the Milken Knights, but more often identified as team No. 1836, spent the competition's first day frantically working to make the robot run and the entire second morning stripping it down to comply with the 120-pound weight limit. When Lancebot finally made it onto the field on a Saturday afternoon, it instantly crashed into another machine, shattering its own electronic board. By the end of the third and final day, the repaired Milken robot had managed to score just one point.

“It was kind of a disappointment in San Diego, but nobody just gave up,” Jonathan Zur, an 11th-grader and the team’s co-captain, said. “We all knew we could do better.”

Two weeks later, at a regional competition in Long Beach, they did just that. The team's 22 middle- and high-school students earned Lancebot a second-place finish, the best result for the Milken team in its six years of entering the competition.

The mission of the program known as FIRST (For Inspiration and Recognition of Science and Technology), in the words of founder Dean Kamen, is “to transform our culture by creating a world where science and technology are celebrated and where young people dream of becoming science and technology leaders.” The nonprofit organization has been holding international competitions for robots designed and built by high school students for the past 21 years, and at the Milken campus on a Friday afternoon in late April, its transformative power was evident.

Even while 400 other robotics teams from around the world were participating in the championship round of the FIRST Robotics Competition in St. Louis — which the Milken Knights came close to, but did not qualify for — a few members of the team were still only too happy to demonstrate their robot’s abilities.

“We’re running a special drive-train called West Coast Drive, which has six wheels, and the center wheel is lowered so the whole robot can tip back and forth,” Michael Bick, an eighth-grader, said. “You have a smaller wheel base, and so that allows you to turn more efficiently.”

Lancebot is powered by a battery about half the size of that of a typical car, and it includes mechanical and pneumatic as well as electronic parts. Like all of this year’s robotic entries, the Milken machine had to be able to maneuver around a field about half the size of a regulation basketball court on which it had to launch small, foam basketballs through one of four hoops mounted at the ends of the court and retrieve those balls either from the floor or from the human operators standing at the court’s edges.

In essence, the robot had to be able to play basketball. But if that task appears straightforward, designing and building a robot to do those things is anything but.

“There’s a lot of student enthusiasm, and they’re doing high-level stuff here,” Roger Kassebaum, director of the Mitchell Academy of Science and Technology at Milken and the robotics team’s mentor, said.

This year, for the first time, the Milken robot was designed entirely on a computer before fabrication even began. Bick did all the computer-aided design, or CAD, using a computer that was built by fellow teammate Josh Rusheen, who is in the 11th grade.

And the student work isn’t exclusively technological.

In competition, three robots, each from a different team, compete together, so their makers have to learn how to cooperate with people they’ve just met. And because fielding a robotics team can be expensive — on top of teacher salaries, Kassebaum estimated that the program costs about $20,000 annually to run — fundraising and developing partnerships with local businesses and corporate sponsors is also important.

“This year, we made a brochure and launched a more developed version of our Web site,” said Milana Bochkur Dratver, one of two female members of the team. Dratver, who started on the team last year, when she was in ninth grade, mostly focuses on public relations for the team.

On the field, she said, one major reason for Team 1836’s success was Lancebot’s performance in the first 15 seconds of each match, when all robots have to act independently, without any human guidance.

“Our programmer, Daniel Kessler — this was his very first year,” Dratver said. “He’s a ninth-grader, and he was able to program our autonomous round. It was very successful.”

Baskets scored during the autonomous period are worth significantly more than baskets scored during the remaining two minutes of each match, when drivers control the robot.

Milken’s robotics team has become a selling point for prospective students.

“I was considering either Milken or Harvard-Westlake,” said Austin Shalit, an eighth-grader and the team’s pneumatics captain. “I came here because I was very drawn by the robotics and science research. That’s what really made the decision for me.”

“The robotics team is absolutely why both of my kids came here,” said Hal Schloss, a former software developer who acts as the software and Web site mentor for the team. His son and daughter, now both in college studying computer science and aerospace engineering, both served as captains of the robotics team at Milken.

Schloss, who has, with his wife, provided Shabbat meals for the team during competitions for at least the last three years, said Shabbat observance can be difficult, particularly for Orthodox Jews like himself. As for his children, when they competed, Schloss said, “I didn’t look too hard. They did more than I would’ve liked.”

Kassebaum said he doesn’t know of any other Jewish day schools in the United States that field robotics teams in the FIRST competition. In Israel, where competitions are not held on Shabbat, it’s a different story.

“There’s a Tel Aviv regional,” Kassebaum said, and the Milken team competed there in 2010. “We ended up being finalists.”

Israeli-built robots shoot for U.S. competition

Forward Omri Casspi made the leap from Israel to the National Basketball Association in 2009, but the latest Israeli hoopsters seeking to compete on American soil aren’t human.

Earlier this month, several thousand spectators watched student-built robots from across Israel square off for two days on a custom-sized basketball court at Tel Aviv’s Nokia Arena.

Dozens of high school teams built their own robots for a chance to represent Israel in the FIRST (For Inspiration and Recognition of Science and Technology) World Championship, to be held at St. Louis’s Edward Jones Convention Center from April 25-28. This year’s St. Louis-bound teams include Team Elysium from Maccabim-Reut-Modiin’s Mor High School, Team Orbit from Binyamina’s ORT High School, and Raptor Force Engineering from Jim Elliot High School in Lodi, Calif.

FIRST is a worldwide non-profit that encourages students to explore and develop their abilities in STEM (Science, Technology, Engineering and Math) disciplines in a fun and supportive environment. Founded in 1989 by technologist and Segway inventor Dean Kamen, FIRST currently has branches in five countries—Brazil, Canada, Israel, Mexico and the U.S.—with over 250,000 school-age children and 68,000 adult team mentors participating annually in competitive events.

Six weeks ahead of the regional final in Tel Aviv, 46 teams of high school students and their adult mentors were tasked with using their knowledge of science and engineering principles to build game-play robots. The student-built robots were required to have the following basketball-related capabilities: shooting free-throws; gathering rebounds to convert field goals; and attempting to balance between one and three robots on seesaws placed in the middle of the court.

During the season-ending playoffs, teams had to take things one step further and forge alliances with two partner teams—a process that resembled a schoolyard kickball draft.

Kamen—whose father, well-known American Jewish comic illustrator Jack Kamen, designed the FIRST logo—was a highly visible figure in this year’s regional competition in Israel. Wearing a bright red Hawaiian shirt, the younger Kamen served as a referee and an English-language game announcer during the two-day event.

Among the robots at the competition, one standout presence was a bright pink robot developed by an all-girls team called "Ladies FIRST," from Beersheba's Ulpana Amit religious high school. Sponsored by the Beersheba Municipality and Ben-Gurion University's jointly run INBAL Project (which encourages teenage girls to pursue studies and careers in science and engineering), the team of plucky young women from the Negev was excited to make the final round.

"We are the first and only all-girls team to join the competition," said team captain Tal-Or Wartzmann, amidst the raucous cheers of her teammates. "We girls set up the team through our own efforts. The girls came together, and we found corporate sponsors and got [Beersheba] city hall and Ben-Gurion University to join the effort."

Not all of the fun belonged to the teenagers. Also attending the two-day event were local political figures and business leaders in both Israeli and American industry, including Tel Aviv-Jaffa Mayor Ron Huldai, Bank Hapoalim Chairman Yair Serussi and FIRST Israel co-founder Josh Weston.

“The mayor views the scientific disciplines as an important field of study and [believes] that any initiative that succeeds in challenging the youth and developing their capacity for advanced thought is an interesting and welcome initiative,” Huldai’s office wrote in an email to JointMedia News Service.

Tel Aviv City Hall, Huldai added, is “pushing forward a strategic effort towards solidifying its standing as the Silicon Valley for firms outside of the United States.”

FIRST Israel certainly has appeared on the radar of young technology aficionados outside the country. Two U.S.-based teams from Christian high schools located in Lodi, Calif., and Marshall, Va., chose to compete in this year’s regional championship.

"Our team mentor has been talking about coming to this competition for a couple of years now, and this is the first time we've actually had enough money to make the trip," said 17-year-old Fresta Valley High School senior Christian Berryman. "We are, like, famous here because we are one of two teams from America. Everyone comes up and shakes our hands. It's very cool!"

Judea Pearl, father of slain WSJ reporter, is a leader in artificial intelligence

A man arrives at an airport for a flight, and as he goes through security the agent asks some questions.

Did anyone help him pack his suitcase? What is the purpose of his trip? Is anyone accompanying him?

During the conversation, the agent enters answers and facial reactions into a computer pre-programmed with millions of pieces of information relating to the behavior of suspicious passengers.

Such man-and-machine collaborations, in this instance to detect terrorists, are not yet in place at airports. But they already are in use in fields ranging from medicine and genetics to Microsoft diagnostics and Google searches.

Underlying the remarkable advances in the partnership between humans and machines are research studies in artificial intelligence. AI is the subfield of computer science that aims to discover the fundamental building blocks of thought, creativity, imagination and language—those elements of the mind that make us intelligent.

Prof. Judea Pearl of the University of California, Los Angeles (UCLA) is among the internationally recognized pioneers in the field, and on March 29 he will add to his string of honors and awards the Harvey Prize in Science and Technology from the Technion-Israel Institute of Technology.

Pearl was selected for this recognition, which carries a $75,000 honorarium, for his “wide-ranging and keen research,” which has led to “his foundational work that has touched a multitude of spheres in modern life,” according to the citation.

Pearl, 75, born and raised in the Orthodox enclave of Bnei Brak, near Tel Aviv, leads a bifurcated life. As a professor emeritus, he teaches a class and guides doctoral students at UCLA. This, and his continuing research, takes up about half of his time.

The other half is devoted to the Daniel Pearl Foundation, headed and established by him and his wife Ruth following the 2002 kidnapping and murder by Pakistani extremists of their son Daniel, a reporter for the Wall Street Journal.

The foundation seeks to perpetuate Daniel’s ideals, and each year it organizes the Daniel Pearl Music Days around Daniel’s Oct. 10 birthday. This year, the event was celebrated with 2,091 separate concerts and performances in 84 countries, among them such unlikely venues as Saudi Arabia and Iran, according to the foundation.

The foundation also runs a fellowship that each year brings three working journalists from Muslim countries to the United States for five-month internships at U.S. newspapers, including the Wall Street Journal, The New York Times, and the Los Angeles Times, and for one week at the Jewish Journal of Los Angeles.

In his effort to draw some meaning from his son’s murder, the computer scientist-cum-philosopher has evolved into a forceful public speaker and newspaper columnist, including frequent commentaries in the Jewish Journal.

All the while, he’s continued to distinguish himself in the field of computer science. In 2008, when he received the Benjamin Franklin Medal in Computer and Cognitive Science from the Franklin Institute, Pearl was credited with research that “changed the face of computer science,” while his three books “are among the most influential works in shaping the theory and practice of knowledge-based systems.”

His combined work schedule has left Pearl little time to pursue his previous avocations as leader of a Hebrew-language choir, singer, guitarist and collector of rare, early editions of books on Judaica, philosophy, and history of science.

In his professional research, Pearl sees the interaction between humans and computers as a two-way street, in which humans infuse knowledge into machines, mainly in the form of natural language and graphs. The computer, in turn, sharpens human understanding, to the point where, Pearl says, “The only way to learn more about ourselves is by programming robots to emulate our behavior and, in this way, learn the architecture of the human mind.”

Pearl’s major contribution to the two-way dialogue between man and machine has been, first, in the area of uncertainty, a constant in every human endeavor, and later in causality, the relationship between cause and effect.

In our daily lives “we are prisoners of uncertainty,” Pearl says. He offers as an example a doctor’s examination of a patient. Using his knowledge and an array of sophisticated tools, the doctor will try to diagnose the patient’s symptoms and devise a treatment. However, even the best physician often can’t be certain he is prescribing the best possible cure.

The doctor’s computer can’t be certain, either, but it can review and combine thousands of pieces of information and offer the doctor a choice of the most promising treatment options.

Besides the ability to manipulate and recombine innumerable bits of information almost instantly, the robotic or computer helper can follow the resulting rules more consistently than a human, Pearl said.

But even so basic an example as a medical diagnosis involves tens of thousands of facts and rules, which must be programmed by a human and digested by the computer.

Pearl’s next step was to decompose this mass of facts and formulas into what he labeled “Bayesian networks,” in honor of Thomas Bayes, an 18th-century English mathematician. The networks mimic the neural activity of the human brain, constantly exchanging messages without the benefit of a supervisor.
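The diagnostic reasoning described above is, at its core, Bayes’ rule. A minimal sketch, with entirely made-up numbers for a hypothetical disease and symptom, shows why even a strong test result leaves a doctor uncertain:

```python
# Hypothetical numbers: prior prevalence, sensitivity, false-positive rate.
p_disease = 0.01          # 1% of patients have the disease
p_sym_given_d = 0.90      # symptom appears in 90% of the sick
p_sym_given_not_d = 0.05  # symptom also appears in 5% of the healthy

# Bayes' rule: P(disease | symptom)
p_symptom = p_sym_given_d * p_disease + p_sym_given_not_d * (1 - p_disease)
p_d_given_sym = p_sym_given_d * p_disease / p_symptom

print(round(p_d_given_sym, 3))  # prints 0.154
```

Even with a symptom that is strongly tied to the disease, the posterior probability here is only about 15 percent, because the disease is rare to begin with. A Bayesian network chains many such local calculations together across thousands of interrelated facts.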

The research on uncertainty occupied Pearl for much of the first half of his career, and when it was finished in the late 1980s he turned his attention to the theory of causality to further advance the computer’s learning process.

Causality seems a fairly simple concept: We step on the gas pedal and the car accelerates. However, it’s easy to confuse this with the mere association between occurrences.

For instance, the word “malaria” is a contraction of the medieval Italian “mala” and “aria,” meaning “bad air,” because people who came down with the disease had often been near a swamp and breathed its foul air. Only later was it discovered that it was not the air that triggered the disease, but mosquitoes that bred in the swamp.
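The malaria story can be turned into a toy simulation. The following sketch, with invented probabilities, has swamps produce both foul air and mosquito bites, while only the bites cause disease; the naive association between bad air and malaria is strong, but it vanishes once mosquito exposure is held fixed:

```python
import random

random.seed(0)

def person():
    """One simulated villager; only mosquito bites are causal."""
    near_swamp = random.random() < 0.3
    bad_air = near_swamp and random.random() < 0.9
    bitten = near_swamp and random.random() < 0.8
    malaria = bitten and random.random() < 0.5
    return bad_air, bitten, malaria

people = [person() for _ in range(100_000)]

def malaria_rate(rows):
    return sum(m for _, _, m in rows) / len(rows)

# Naive association: malaria looks far more common among those who
# breathed the bad air, even though the air has no causal effect.
exposed = [p for p in people if p[0]]
unexposed = [p for p in people if not p[0]]
print(malaria_rate(exposed), malaria_rate(unexposed))

# Holding mosquito bites fixed, the "bad air" effect disappears.
bitten_exposed = [p for p in people if p[1] and p[0]]
bitten_unexposed = [p for p in people if p[1] and not p[0]]
print(malaria_rate(bitten_exposed), malaria_rate(bitten_unexposed))
```

Disentangling which of many such correlated factors is actually doing the causal work is exactly the problem Pearl’s causality framework addresses.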

One case in which a computer helped involved a lengthy study at Montreal’s McGill University that sought to prove that warm-ups before a game reduce the number and severity of sports injuries.

The researchers gathered statistics from numerous teams but also had to take into account such diverse factors as the types of warm-ups, attitudes of different coaches and players, ages of the team members and their previous injuries, pressures on the teams to win, fatigue from previous games, and so on.

For humans, it was impossible to juggle all these factors for hundreds of players, and the best that could be done was to establish some general associations between warm-ups and injuries.

Computers, however, could absorb and combine all these factors, judging how they affect each other, and come up with appropriate cause-and-effect relationships.

Pearl is now exploring ways of programming computers to reason introspectively and to take responsibility for their actions.

Such a project conjures up sci-fi scenarios of robots eventually outsmarting and subjugating their human inventors, a possibility to which Pearl says he has devoted considerable thought.

While he believes that, at least in theory, “everything man can do, robots can do better,” he hopes that futuristic robots can also be indoctrinated with humanistic values and sensibilities.

The one exception to the robot’s perfectibility may be instilling a sense of humor.

“If we can make a computer come up with a funny joke, whose point generally rests on a failure of our anticipated expectation, we will have reached the pinnacle of success,” Pearl said.

Science program helps six Milken grads head to MIT

Six graduates from Milken Community High School’s 2008 class will enroll this fall at Massachusetts Institute of Technology, a campus that features among its alumni 26 Nobel laureates, more than one-third of all U.S. astronauts and former Israeli Prime Minister Benjamin Netanyahu. The academic pressure at MIT is notorious, and one of the Milken grads, Richard Dahan, spent some time this summer warming up with a rigorous study program in preparation.

Dahan has always been interested in math and science, he said, but it was Milken’s Mitchell Academy of Science and Technology (MAST) and the program’s director, Roger Kassebaum, that provided him with the discipline and the opportunities to explore, which helped him get into MIT.

“What made MAST great was not only that it exposed me to various technical fields in such extreme depth, but also that it transformed my interests in those fields into passions,” said Dahan, who plans to study mechanical engineering and management.

MIT received more than 13,000 applications from students for fall 2008, of which it accepted less than 12 percent. Milken’s impressive showing of six graduates from its class of 2008 includes four from one family — Richard Dahan, along with his three siblings Daniel, Sara and Robin — and Neta Batscha and Stephen Hendel.

Milken’s success in placing students at MIT, as well as other prestigious universities, speaks well of the academic strides the school is making through its Centers of Excellence, which include the Advanced Jewish Studies Center and the Stephen Wise Music Academy. But even more impressive is that the science and technology academy is a center in name only. The students spend time doing research in real-world labs, rather than trying to replicate the experience in a classroom.

“We spend money on kids, not bricks,” said Kassebaum, the man whom many MAST students credit with helping to make their higher-ed dreams come true.

Established in 2003 with financial help from the Edward D. and Anna Mitchell Family Foundation and the Kayla Mitchell Foundation, MAST has taken an active role in moving students beyond the classroom.

“One of the things we were really committed to when we started the academy is that kids were not going to fit into the typical box of science classes,” said Jason Ablin, Milken’s head of school.

MAST features several short-term tracks for learning (robotics, physics and engineering) with competitions, but most of the center’s energy is directed toward a three-year science research course that encourages students to work in laboratories in the United States and Israel. Currently there are 30 students in the three-year track. Once premier students reach their senior year, they prepare their research for the national Intel Talent Search, the top science competition for high school students.

MAST students have so far achieved semifinalist standing in the Intel competition, but in 2007 graduating seniors Michael Hakimi and Talia Nour-Omid won the first-ever X PRIZE competition for high school students, developing a model for biomonitoring sunglasses to keep space travelers healthy during civilian spaceflight.

Both students will attend USC this fall. While Nour-Omid will study computer science, Hakimi’s interests lie in business. He applied scientific research principles to market analysis, writing a paper for the Intel competition on the effects of terrorism on financial markets.

Kassebaum said the students who enter MAST are mostly average kids who are encouraged to discover what excites them.

“They are given free rein. There’s no teacher holding them back,” he said. “In science research there’s no limit. Their job is to find the edge of their field of interest.”

Once students’ areas of interest are narrowed and they feel comfortable reading articles in scientific publications, they can approach — or “begin almost stalking,” as Kassebaum put it — graduate students about working beside them in a university lab environment.

Batscha, who will attend MIT and will likely study bioengineering, worked at a Cal State Northridge microbiology lab to see whether she could use single-cell microorganisms, called methanogens, to improve ethanol production from plant waste. “I wanted to do something that could impact the world,” she said.

While the experiment didn’t yield the desired results, Batscha did discover that methanogens could be grown with yeast.

MAST student Hendel, who is bound for MIT, spent time at UCLA studying how chemical changes in DNA can play a role in the development of the central nervous system. “I was interested in manipulative genetics and silencing genes and looking into that research,” he said.

Although his paper wasn’t published, he said a graduate student was able to use his research in a project.

While the program can be a time-consuming, stressful addition for students who try to balance the demands of school with the college application process, MAST participants all speak glowingly of the program’s director and the support he provides.

Kassebaum, who is not Jewish, has been with Milken for nine years. He taught for 22 years at Millard North High School in Omaha, Neb., where he won the Presidential Award for Excellence in Science Teaching in 1991, among other honors. After Kassebaum received a Milken Educator Award from the Milken Family Foundation in 1997, Milken Community High School implemented some of his teaching methods and then began actively pursuing him to join the school.

Since co-creating the Mitchell Academy in 2003 with support from former Milken head of school Rennie Wrubel, Kassebaum has regularly encouraged students to choose their own interests and helped them organize their research.

For Richard Dahan, who was in the science academy with his brother, Daniel, and sister, Robin, trying to make a distinction between Kassebaum and MAST would be difficult.

“He is what makes the academy special. Aside from his unparalleled dedication to every MAST activity — whenever we stayed at Milken until 1 a.m. or 2 a.m. to work on a project, he was right there with us — he also brought an amazing attitude and always guided us in the right direction,” Dahan said.
