
I serve as the Director of Marketing and Communications at a Jewish day school in Los Angeles with more than 520 students, from toddlers through eighth grade. My work depends on trust.
Families trust the photos, videos and stories we share about our school community. Recently, I have heard parents express a new concern that may put that trust at risk. They worry about what could happen to images of their children once those images appear online. Even when accounts are private, parents know that once a photo exists on a device or platform, it can be copied, altered or redistributed by anyone with an internet connection.
For many Jewish parents, these concerns exist amid heightened fears about our children’s safety. With the steady rise in antisemitic incidents, families are often thinking about risks that others may not have to consider. There is an underlying question that plagues us each day: will my child be targeted? The possibility that images of our children could be manipulated or misused through artificial intelligence adds another layer to that concern, making it even harder to feel a sense of security in an already uncertain environment.
People are not going to stop taking pictures of their children or sharing milestones with friends and family. But the rise of generative artificial intelligence has changed the context in which those images circulate. Parents now must confront possibilities that would have seemed unthinkable just a few years ago. What happens if someone downloads a photo and uses AI to alter it? What if a child’s face is inserted into another image or video?
For decades, people treated photographs and video as reliable evidence of reality. If you saw it, you could assume it happened. Artificial intelligence is rapidly weakening that assumption.
Studies show that people struggle to distinguish authentic media from AI-generated content. A 2025 study by the biometric verification company iProov tested 2,000 participants in the United States and the United Kingdom and found that only 0.1% could correctly identify real images and videos versus deepfakes across all the examples they were shown. At the same time, more than 60% of participants believed they could spot a deepfake, even though almost none could do so consistently.
The problem is becoming increasingly common. A 2024 report from the identity verification company Sumsub found that detected deepfakes worldwide increased by more than 1,500 percent between 2019 and 2023. Financial scams using AI-generated voices are also rising. In one widely reported case in 2024, a finance employee at a multinational company transferred about $25 million after joining a video call that appeared to include company executives but was actually faked.
It is time to adopt a new standard. In the American legal system, for example, the standard is the presumption of innocence. A person is treated as innocent until proven guilty. The digital world now requires a similar but inverted standard. When it comes to images and video online, the safest starting point is the presumption that what we see is not authentic until it is verified.
Throughout history there are countless examples of how societies adapt to new tools. Early automobiles did not include seat belts. Engineers added them later, and eventually seat belts became mandatory safety features. Doors existed before locks; the lock was invented only after the risk became clear. New technologies reveal risks, people become aware of those risks and safeguards follow. With artificial intelligence, some safeguards will come from technology itself. Companies are developing systems that identify AI-generated content, while researchers and governments are exploring detection tools and disclosure rules that require creators to label manipulated media.
In order to protect ourselves, we will need to adjust our instincts. For most of modern history, the natural response to a photograph or video was belief. In the age of AI, the safer response is skepticism. When a sensational image or clip appears online, the first thought should now be: this could be AI-generated. I should verify it before believing it.
Parents’ concerns about how images of their children circulate online are justified. Artificial intelligence has created real risks. At the same time, the solution will not be to stop documenting our lives or sharing meaningful moments. Jewish tradition teaches us not to delay a simcha.
Similarly, as Jews, we will not be deterred from celebrating and reflecting on what matters most. We must not retreat from sharing our children's lives; we must simply do so with greater awareness.
In a world where digital media can be created or altered with ease, the safest assumption may be simple: fake until proven real.
Robyn Fener is Director of Marketing and Communications at Sinai Akiba Academy.