As artificial intelligence increasingly enters digital forensics and evidence analysis, Indian evidence law is facing a difficult question: whether existing legal frameworks are equipped to handle machine-generated proof. The courts must move beyond treating AI outputs only as digital records and consider a clearer legal framework for algorithmic evidence and machine-generated expert opinion.
AI has moved from simple database searches to advanced pattern recognition across forensic domains once dependent on human examiners. Tools such as TrueAllele use statistical machine learning to analyse complex DNA mixtures that may be difficult for traditional methods to resolve.
AI Tools Reshape Forensic Investigation
Beyond the laboratory, AI can identify microscopic striations on ballistic shell casings, assist in automated handwriting comparison and help establish probabilistic matches. In the digital sphere, such tools are described as capable of recovering deleted data, identifying malware and tracing anonymous blockchain transactions in cryptocurrency cases.
The growing role of AI-powered electronic discovery platforms, which can review large volumes of documents for relevance, raises a central legal issue: whether an AI system’s output can be recognised as expert opinion under the Bharatiya Sakshya Adhiniyam, given that machine outputs depend on the data and assumptions fed into them.
Black Box Problem Tests Indian Evidence Law
The admissibility of AI-generated evidence in India would fall under Sections 60 and 63 of the Bharatiya Sakshya Adhiniyam, which govern electronic records. Landmark cases such as Anvar P.V. and Arjun Panditrao Khotkar established the importance of Section 65B(4) certificates under the earlier Indian Evidence Act to ensure the authenticity of electronic evidence.
However, this framework remains restrictive for dynamic AI outputs. The main challenge is the “black box” problem, where the internal logic of an algorithm may be opaque even to its developers. This can create an analytical gap between the raw data and the conclusion presented before a court. Because of this uncertainty, AI-generated proof in India remains at a rudimentary stage, and courts may need independent human experts to re-verify the methodology and underlying facts before relying on such material.
Global Models Offer Lessons for India
In the United States, judges act as gatekeepers under the Daubert standard, evaluating AI evidence on factors such as testability, peer review and error rates, while cases like State v. Loomis reflect tensions between proprietary trade secrets and due process rights.
The European Union’s 2024 AI Act takes a regulatory approach, classifying forensic and judicial systems as high-risk and requiring human oversight and transparency. China uses AI systems to assist judges in identifying evidentiary issues and suggesting precedents, while Colombia launched UNESCO-backed judicial guidelines in December 2024.
For India, the transition from the Indian Evidence Act to the Bharatiya Sakshya Adhiniyam offers an opportunity for reform. A dedicated provision for AI evidence should require disclosure of system architecture and training data, and recognise validated algorithmic systems as legitimate sources where they meet standards of reliability and validation.
India also needs to bridge the technical literacy gap among judges and lawyers, ensure the defence has access to technical experts, and guard against automation bias. Courts should adopt a “glass box” approach, examining the scientific validity of algorithms rather than merely authenticating the device or record from which AI-generated evidence emerges.