A family in China has reportedly been using an artificial intelligence clone of a dead man to speak with his elderly mother, who is said to be unaware that the person appearing in video calls is not her son. The case, cited in reports carried by the South China Morning Post and attributed to Chinese outlet Litchi News, has drawn attention to the growing use of AI tools to recreate the voices and appearances of the dead, while raising difficult questions about grief, care for the elderly, and the ethics of deception.
A Family’s Attempt to Conceal a Death
The reports describe the woman as an octogenarian living with heart disease. Her family, based in Shandong province, reportedly wanted to shield her from the news that her only child had been killed in a road accident.
To sustain that effort, the woman’s grandson is said to have approached a businessman offering AI cloning services and supplied pictures, videos and audio recordings of his recently deceased father. The businessman reportedly said his work was aimed at comforting the living, while also remarking that he was in the business of deceiving people’s emotions.
The AI version of the dead man was then used in regular video calls with the woman. During those conversations, the digital clone reportedly told her that her son had moved away and could not meet her in person.
Conversations With an AI Clone
In one exchange quoted in the report, the mother urged her son to call more often so she could know whether he was living well in another city, and said she missed him deeply and was sorry she could not see him in person.
The AI clone replied in language presented as that of a dutiful son, saying he was too busy to talk for long, urging her to take care of herself, and promising that once he had earned enough money he would return home and fulfil his filial duty.
The account has been presented as an example of how far AI technology is moving into emotionally sensitive areas of life. As systems become more capable of reproducing a person’s voice and likeness, the case suggests that such tools may increasingly be used not only for remembrance, but also to manage distressing truths inside families.
Questions of Verification and Ethics
The story has also prompted caution. While the report notes that Litchi News appears to exist and is linked to Jiangsu Broadcasting Corporation, China’s third largest television network, the original article could not be located and the claims described in the account could not be independently verified.
Even so, the case has stirred a sharp public reaction online. Some internet users cited in the report said the family’s actions had gone too far, arguing that the deception could inflict deeper harm than the truth itself.
The episode has left unresolved a broader question at the centre of the debate. As artificial intelligence becomes more persuasive in replicating the dead, families and caregivers may increasingly face the issue of whether emotional protection can justify deliberate falsehood, especially when the person being deceived is elderly and vulnerable.