THE PROBLEM OF LEGAL RESPONSIBILITY FOR THE HARMFUL CONSEQUENCES OF USING ARTIFICIAL INTELLIGENCE SYSTEMS
DOI: https://doi.org/10.33244/2617-4154-1(18)-2025-268-279

Keywords: artificial intelligence, legal responsibility, responsibility gap, civil liability, technological hallucination, law enforcement challenges

Abstract
The article explores the issue of legal responsibility for damage caused by the use of artificial intelligence (AI) systems. It discusses the phenomenon of the "responsibility gap" and its impact on legal regulation. The article analyzes conceptual approaches to determining the subject of responsibility in the context of the autonomous functioning of AI and the specifics of its learning process.
The challenges for law enforcement arising from the probabilistic nature of AI's operation, its capacity for self-learning, and the "black box" problem are examined. Special attention is given to the issue of AI hallucination, whereby a system generates false or non-existent data, which complicates establishing causal relationships and identifying the responsible party.
The article examines three main forms of the "responsibility gap": the true gap, the apparent gap, and the "dilution of responsibility" caused by multi-level interactions among different actors. Possible legal mechanisms for addressing these issues are analyzed, including the introduction of special liability regimes for AI developers, owners, and users.
International experience in regulating AI is also discussed, with a focus on the provisions of the new European AI Act, which establishes a risk-based approach to responsibility. The article concludes that existing legal concepts must be adapted and new regulatory models developed that can account for the particular characteristics of autonomous systems.