Russian Federation
UDC 004.89
A gap in knowledge has been identified: artificial intelligence systems are sometimes mistakenly attributed the properties of a subject. The article advances and substantiates the hypothesis that such errors stem from an overemphasis on the ability of artificial intelligence systems to solve intellectual tasks, whereby this ability is wrongly taken as evidence that the technical system possesses the other properties of a subject: will, responsibility, and consciousness. The error is reinforced by the communication capabilities of modern artificial intelligence systems, which simulate conversation with humans. It is demonstrated that artificial intelligence systems are a technological tool, for whose creation and use only a human being can be responsible, and that only a human being, in turn, can act as a subject of thought and activity.
artificial intelligence subjectivity, artificial intelligence agency, artificial intelligence quasi-subjectivity, artificial intelligence responsibility, artificial intelligence legal capacity
1. Lektorsky V. A., Alekseeva E. A., Emelyanova N. N., Katunin A. V., Merkulova I. G., Pirozhkova S. V., Trufanova E. O., Shchedrina I. O., Yakovleva A. F. “Artificial Intelligence in the Research of Consciousness and in Social Life (in Honor of the 70th Anniversary of A. Turing’s Paper ‘Computing Machinery and Intelligence’) (Papers of the ‘Round Table’)”. Filosofiya nauki i tekhniki = Philosophy of Science and Technology 27.1 (2022): 5–33. (In Russian). https://doi.org/10.21146/2413-9084-2022-27-1-5-33
2. Lektorsky V. A., Vasil’yev S. N., Makarov V. L., Khabrieva T. Ya., Kokoshin A. A., Ushakov D. V., Valuyeva E. A. et al. Human and the Artificial Intelligence Systems. St. Petersburg: Yuridicheskiy tsentr, 2022. 328 p. (In Russian).
3. Leshkevich T. G. “The Paradox of Trust in Artificial Intelligence and Its Rationale”. Filosofiya nauki i tekhniki = Philosophy of Science and Technology 28.1 (2023): 34–47. (In Russian). https://doi.org/10.21146/2413-9084-2023-28-1-34-47
4. Leshkevich T. G. “The Problem of Subjectivity of Neural Networks: Humans and Non-Humans”. Filosofiya nauki i tekhniki = Philosophy of Science and Technology 29.2 (2024): 125–135. (In Russian). https://doi.org/10.21146/2413-9084-2024-29-2-125-135
5. Madzhumder M. Sh., Begunova D. D. “Causes of Content Distortion: Analysis and Classification of Hallucinations in Large GPT Language Models”. Iskusstvenniy intellekt i prinyatie resheniy = Artificial Intelligence and Decision Making 3 (2024): 32–41. (In Russian). https://doi.org/10.14357/20718594240303
6. Morkina Ju. S. “Consciousness as an Antinomy (Antinomy of the Concept of Consciousness and the Philosophy of Artificial Intelligence)”. Filosofiya nauki i tekhniki = Philosophy of Science and Technology 29.1 (2024): 20–33. (In Russian). https://doi.org/10.21146/2413-9084-2024-29-1-20-33
7. Trufanova E. O. “The Subject: Challenges of the Digital”. Galactica Media: Journal of Media Studies 6.4 (2024): 215–234. (In Russian). https://doi.org/10.46539/gmd.v6i4.525
8. Khabrieva T. Ya. “Legal Issues of the Artificial Intelligence Identification”. Vestnik Rossijskoj akademii nauk = Herald of the Russian Academy of Sciences 94.7 (2024): 609–622. (In Russian). https://doi.org/10.31857/S0869587324070015
9. Yastrebov O. A. “The Legal Capacity of Electronic Persons: Theoretical and Methodological Approaches”. Trudy Instituta gosudarstva i prava RAN = Proceedings of the Institute of State and Law 13.2 (2018): 36–55. (In Russian).
10. Floridi L. “A Proxy Culture”. Philosophy & Technology 28 (2015): 487–490. https://doi.org/10.1007/s13347-015-0209-8
11. Nissenbaum H. “How Computer Systems Embody Values”. Computer 34.3 (2001): 118–120. https://doi.org/10.1109/2.910905