<!DOCTYPE article
PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20190208//EN"
       "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.4" xml:lang="en">
 <front>
  <journal-meta>
   <journal-id journal-id-type="publisher-id">Economic and Social Research</journal-id>
   <journal-title-group>
    <journal-title xml:lang="en">Economic and Social Research</journal-title>
    <trans-title-group xml:lang="ru">
     <trans-title>Экономические и социально-гуманитарные исследования</trans-title>
    </trans-title-group>
   </journal-title-group>
   <issn publication-format="print">2409-1073</issn>
   <issn publication-format="online">3033-5442</issn>
  </journal-meta>
  <article-meta>
   <article-id pub-id-type="publisher-id">98848</article-id>
   <article-id pub-id-type="edn">QVIFKL</article-id>
   <article-categories>
    <subj-group subj-group-type="toc-heading" xml:lang="ru">
     <subject>Итоги круглого стола Института ВП СГН «Когнитивные технологии: философский аспект»</subject>
    </subj-group>
    <subj-group subj-group-type="toc-heading" xml:lang="en">
      <subject>Results of the Round table of the Institute of HTL SSH “Cognitive technologies: Philosophical aspect”</subject>
    </subj-group>
    <subj-group>
     <subject>Итоги круглого стола Института ВП СГН «Когнитивные технологии: философский аспект»</subject>
    </subj-group>
   </article-categories>
   <title-group>
    <article-title xml:lang="en">Artificial Intelligence: On M. Gabriel’s “New Ethics”</article-title>
    <trans-title-group xml:lang="ru">
     <trans-title>Искусственный интеллект: о «новой этике» М. Габриэля</trans-title>
    </trans-title-group>
   </title-group>
   <contrib-group content-type="authors">
    <contrib contrib-type="author">
     <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-1721-6388</contrib-id>
     <name-alternatives>
      <name xml:lang="ru">
       <surname>Прись</surname>
       <given-names>Игорь Евгеньевич</given-names>
      </name>
      <name xml:lang="en">
       <surname>Pris</surname>
       <given-names>I. E.</given-names>
      </name>
     </name-alternatives>
     <email>frigpr@gmail.com</email>
     <bio xml:lang="ru">
      <p>кандидат физико-математических наук;</p>
     </bio>
     <bio xml:lang="en">
      <p>candidate of physical and mathematical sciences;</p>
     </bio>
     <xref ref-type="aff" rid="aff-1"/>
    </contrib>
   </contrib-group>
   <aff-alternatives id="aff-1">
    <aff>
     <institution xml:lang="ru">Институт философии Национальной академии наук Беларуси</institution>
     <city>Minsk</city>
     <country>BY</country>
    </aff>
    <aff>
      <institution xml:lang="en">Institute of Philosophy of the National Academy of Sciences of Belarus</institution>
     <city>Minsk</city>
     <country>BY</country>
    </aff>
   </aff-alternatives>
   <pub-date publication-format="print" date-type="pub" iso-8601-date="2025-07-29T21:16:44+03:00">
    <day>29</day>
    <month>07</month>
    <year>2025</year>
   </pub-date>
   <pub-date publication-format="electronic" date-type="pub" iso-8601-date="2025-07-29T21:16:44+03:00">
    <day>29</day>
    <month>07</month>
    <year>2025</year>
   </pub-date>
   <volume>12</volume>
   <issue>2</issue>
   <fpage>137</fpage>
   <lpage>145</lpage>
   <history>
     <date date-type="received" iso-8601-date="2024-12-10T00:00:00+03:00">
      <day>10</day>
      <month>12</month>
      <year>2024</year>
     </date>
    <date date-type="accepted" iso-8601-date="2025-02-06T00:00:00+03:00">
     <day>06</day>
     <month>02</month>
     <year>2025</year>
    </date>
   </history>
   <self-uri xlink:href="https://esgi-journal.ru/en/nauka/article/98848/view">https://esgi-journal.ru/en/nauka/article/98848/view</self-uri>
   <abstract xml:lang="ru">
    <p>Критически оценивается «новая этика» искусственного интеллекта, предложенная М. Габриэлем. Утверждается, что в отличие от интеллекта человека искусственный интеллект лишен нормативного измерения или, что эквивалентно, чувствительности к контексту. Автор показывает противоречие между точкой зрения М. Габриэля и реалистическим контекстуальным подходом к этике у Ж. Бенуа, и моральным реализмом Т. Уильямсона, согласно которым первичны не принципы, а моральное восприятие в контексте, парадигматические примеры морального знания. Сравниваются подходы к пониманию искусственного интеллекта М. Габриэля, Д. Андлера, Л. Флориди, С. Рассела. Доказывается целесообразность принципа умеренности Д. Андлера. Реалистическая концепция искусственного интеллекта (ИИ) противопоставляется идеалистической концепции.</p>
   </abstract>
   <trans-abstract xml:lang="en">
     <p>The “new ethics” of artificial intelligence proposed by M. Gabriel is critically evaluated. It is argued that, unlike human intelligence, artificial intelligence lacks a normative dimension or, equivalently, sensitivity to context. The author shows the contradiction between M. Gabriel’s viewpoint and both J. Benoist’s realistic contextual approach to ethics and T. Williamson’s moral realism, according to which what is primary is not principles but moral perception in context, that is, paradigmatic examples of moral knowledge. The approaches of M. Gabriel, D. Andler, L. Floridi, and S. Russell to understanding artificial intelligence are compared. The expediency of D. Andler’s moderation principle is demonstrated. A realistic conception of artificial intelligence (AI) is opposed to an idealistic one.</p>
   </trans-abstract>
   <kwd-group xml:lang="ru">
    <kwd>искусственный интеллект</kwd>
    <kwd>этика ИИ</kwd>
    <kwd>Габриэль</kwd>
    <kwd>моральный прогресс</kwd>
    <kwd>автономия</kwd>
    <kwd>контекст</kwd>
    <kwd>нормативность</kwd>
    <kwd>моральный реализм</kwd>
    <kwd>принцип умеренности</kwd>
   </kwd-group>
   <kwd-group xml:lang="en">
    <kwd>artificial intelligence</kwd>
    <kwd>AI ethics</kwd>
    <kwd>Gabriel</kwd>
    <kwd>moral progress</kwd>
    <kwd>autonomy</kwd>
    <kwd>context</kwd>
    <kwd>normativity</kwd>
    <kwd>moral realism</kwd>
    <kwd>moderation principle</kwd>
   </kwd-group>
   <funding-group>
    <funding-statement xml:lang="ru">Работа выполнена в рамках НИР «Сознание и искусственный интеллект в условиях цифровых трансформаций: научно-методологический и социогуманитарный аспекты» Института философии НАН Беларуси.</funding-statement>
    <funding-statement xml:lang="en">The work has been carried out within the project “Consciousness and Artificial Intelligence in the conditions of digital transformations: Scientific, methodological and socio-humanitarian aspects” of Institute of Philosophy of NAS of Belarus.</funding-statement>
   </funding-group>
  </article-meta>
 </front>
 <body>
  <p></p>
 </body>
 <back>
  <ref-list>
   <ref id="B1">
    <label>1.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Прись И. Е. «Искусственный интеллект и неоэкзистенциализм». Философия в XXI веке: направления и тенденции развития: материалы II Междунар. науч.-практ. конф. (Москва, Зеленоград — Красноярск, 12 апреля 2024): в 3 ч. Под общ. ред. Н. В. Даниелян. Ч. 2. М.: МИЭТ, 2024a. 159—169. EDN: FOARAA.</mixed-citation>
      <mixed-citation xml:lang="en">Pris I. E. “Artificial Intelligence and Neo-Existentialism”. Filosofiya v XXI veke: napravleniya i tendentsii razvitiya: materialy II Mezhdunar. nauch.-prakt. konf. (Moskva, Zelenograd — Krasnoyarsk, 12 aprelya 2024): in 3 parts. Gen. ed. N. V. Danielyan. Pt. 2. Moscow: MIET, 2024a. 159—169. (In Russian).</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B2">
    <label>2.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Прись И. Е. «Искусственный интеллект — не интеллект и никогда им не будет». Наука и инновации 9 (259) (2024b): 26—29. EDN: CDNTHD.</mixed-citation>
      <mixed-citation xml:lang="en">Pris I. E. “Artificial Intelligence Is Not Intelligence and Never Will Be”. Nauka i innovatsii = Science and Innovations 9 (259) (2024b): 26—29. (In Russian).</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B3">
    <label>3.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Прись И. Е. «Квантовоподобное моделирование и его философские основания». Философия науки 3 (102) (2024c): 109—129. https://doi.org/10.15372/PS20240307. EDN: VJUFIK.</mixed-citation>
     <mixed-citation xml:lang="en">Pris I. E. “Quantum-Like Modeling and its Philosophical Foundations”. Filosofiya nauki = Philosophy of Science 3 (102) (2024c): 109—129. (In Russian). https://doi.org/10.15372/PS20240307</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B4">
    <label>4.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Прись И. Е. «Контекстуальный моральный реализм». Сибирский философский журнал 21.4 (2023): 5—28. https://doi.org/10.25295/2541-7517-2023-21-4-5-28. EDN: MIEJMS.</mixed-citation>
     <mixed-citation xml:lang="en">Pris I. E. “Contextual Moral Realism”. Sibirskij filosofskij žurnal = Siberian Journal of Philosophy 21.4 (2023): 5—28. (In Russian). https://doi.org/10.25295/2541-7517-2023-21-4-5-28</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B5">
    <label>5.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Andler D. Intelligence artificielle, intelligence humaine: la double énigme. Paris: Gallimard, 2023. 432 p.</mixed-citation>
     <mixed-citation xml:lang="en">Andler D. Intelligence artificielle, intelligence humaine: la double énigme. Paris: Gallimard, 2023. 432 p.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B6">
    <label>6.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Andler D. “The Normativity of Context”. Philosophical Studies 100.3 (2000): 273—303. https://doi.org/10.1023/A:1018628709589</mixed-citation>
     <mixed-citation xml:lang="en">Andler D. “The Normativity of Context”. Philosophical Studies 100.3 (2000): 273—303. https://doi.org/10.1023/A:1018628709589</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B7">
    <label>7.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Brey P., Dainow B. “Ethics by Design for Artificial Intelligence”. AI and Ethics 4.4 (2024): 1265—1277. https://doi.org/10.1007/s43681-023-00330-4</mixed-citation>
     <mixed-citation xml:lang="en">Brey P., Dainow B. “Ethics by Design for Artificial Intelligence”. AI and Ethics 4.4 (2024): 1265—1277. https://doi.org/10.1007/s43681-023-00330-4</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B8">
    <label>8.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Chakraborty A., Bhuyan N. “Can Artificial Intelligence Be a Kantian Moral Agent? On Moral Autonomy of AI System”. AI and Ethics 4 (2024): 325—331. https://doi.org/10.1007/s43681-023-00269-6</mixed-citation>
     <mixed-citation xml:lang="en">Chakraborty A., Bhuyan N. “Can Artificial Intelligence Be a Kantian Moral Agent? On Moral Autonomy of AI System”. AI and Ethics 4 (2024): 325—331. https://doi.org/10.1007/s43681-023-00269-6</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B9">
    <label>9.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Chalmers D. “The Singularity: A Philosophical Analysis”. Journal of Consciousness Studies 17.9-10 (2010): 7—65.</mixed-citation>
     <mixed-citation xml:lang="en">Chalmers D. “The Singularity: A Philosophical Analysis”. Journal of Consciousness Studies 17.9-10 (2010): 7—65.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B10">
    <label>10.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Coeckelbergh M. AI Ethics. Cambridge, MA: The MIT Press, 2020. 248 p.</mixed-citation>
     <mixed-citation xml:lang="en">Coeckelbergh M. AI Ethics. Cambridge, MA: The MIT Press, 2020. 248 p.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B11">
    <label>11.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Evans R. “The Apperception Engine”. Kim H., Schönecker D., eds. Kant and Artificial Intelligence. Berlin: De Gruyter, 2022. 39—103. https://doi.org/10.1515/9783110706611-002</mixed-citation>
      <mixed-citation xml:lang="en">Evans R. “The Apperception Engine”. Kim H., Schönecker D., eds. Kant and Artificial Intelligence. Berlin: De Gruyter, 2022. 39—103. https://doi.org/10.1515/9783110706611-002</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B12">
    <label>12.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Floridi L. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford: Oxford UP, 2023. 272 p.</mixed-citation>
      <mixed-citation xml:lang="en">Floridi L. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford: Oxford UP, 2023. 272 p.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B13">
    <label>13.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Gabriel M. Der Sinn des Denkens. Berlin: Ullstein, 2018. 368 S.</mixed-citation>
     <mixed-citation xml:lang="en">Gabriel M. Der Sinn des Denkens. Berlin: Ullstein, 2018. 368 S.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B14">
    <label>14.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Gabriel M. Moralischer Fortschritt in dunklen Zeiten: Universale Werte für das 21. Jahrhundert. Berlin: Ullstein, 2020. 369 S.</mixed-citation>
     <mixed-citation xml:lang="en">Gabriel M. Moralischer Fortschritt in dunklen Zeiten: Universale Werte für das 21. Jahrhundert. Berlin: Ullstein, 2020. 369 S.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B15">
    <label>15.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Gudmunsen Z. “The Moral Decision Machine: A Challenge for Artificial Moral Agency Based on Moral Deference”. AI and Ethics 5 (2025): 1033—1045. https://doi.org/10.1007/s43681-024-00444-3</mixed-citation>
     <mixed-citation xml:lang="en">Gudmunsen Z. “The Moral Decision Machine: A Challenge for Artificial Moral Agency Based on Moral Deference”. AI and Ethics 5 (2025): 1033—1045. https://doi.org/10.1007/s43681-024-00444-3</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B16">
    <label>16.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Huang L. T.-L., Papyshev G., Wong J. K. “Democratizing Value Alignment: From Authoritarian to Democratic”. AI and Ethics 5 (2025): 11—18. https://doi.org/10.1007/s43681-024-00624-1</mixed-citation>
     <mixed-citation xml:lang="en">Huang L. T.-L., Papyshev G., Wong J. K. “Democratizing Value Alignment: From Authoritarian to Democratic”. AI and Ethics 5 (2025): 11—18. https://doi.org/10.1007/s43681-024-00624-1</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B17">
    <label>17.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Kim H., Schönecker D., eds. Kant and Artificial Intelligence. Berlin: De Gruyter, 2022. v, 290 p. https://doi.org/10.1515/9783110706611</mixed-citation>
     <mixed-citation xml:lang="en">Kim H., Schönecker D., eds. Kant and Artificial Intelligence. Berlin: De Gruyter, 2022. v, 290 p. https://doi.org/10.1515/9783110706611</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B18">
    <label>18.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Lindner F., Bentzen M. M. “A Formalization of Kant’s Second Formulation of the Categorical Imperative”. arXiv. Rev. 11 July 2019. Web. 15 May 2025. https://doi.org/10.48550/arXiv.1801.03160</mixed-citation>
     <mixed-citation xml:lang="en">Lindner F., Bentzen M. M. “A Formalization of Kant’s Second Formulation of the Categorical Imperative”. arXiv. Rev. 11 July 2019. Web. 15 May 2025. https://doi.org/10.48550/arXiv.1801.03160</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B19">
    <label>19.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">McDonald F. J. “AI, Alignment, and the Categorical Imperative”. AI and Ethics 3 (2023): 337—344. https://doi.org/10.1007/s43681-022-00160-w</mixed-citation>
     <mixed-citation xml:lang="en">McDonald F. J. “AI, Alignment, and the Categorical Imperative”. AI and Ethics 3 (2023): 337—344. https://doi.org/10.1007/s43681-022-00160-w</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B20">
    <label>20.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Perez-Escobar J. A., Sarikaya D. “Philosophical Investigations into AI Alignment: A Wittgensteinian Framework”. Philosophy &amp; Technology 37.3 (2024): 80. https://doi.org/10.1007/s13347-024-00761-9</mixed-citation>
      <mixed-citation xml:lang="en">Perez-Escobar J. A., Sarikaya D. “Philosophical Investigations into AI Alignment: A Wittgensteinian Framework”. Philosophy &amp; Technology 37.3 (2024): 80. https://doi.org/10.1007/s13347-024-00761-9</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B21">
    <label>21.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Powers T. M. “Prospects for a Kantian Machine”. IEEE Intelligent System 21.4 (2006): 46—51. https://doi.org/10.1109/MIS.2006.77</mixed-citation>
     <mixed-citation xml:lang="en">Powers T. M. “Prospects for a Kantian Machine”. IEEE Intelligent System 21.4 (2006): 46—51. https://doi.org/10.1109/MIS.2006.77</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B22">
    <label>22.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Russell S. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Penguin Books, 2020. 352 p.</mixed-citation>
      <mixed-citation xml:lang="en">Russell S. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Penguin Books, 2020. 352 p.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B23">
    <label>23.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Schlicht T. “Minds, Brains, and Deep Learning: The Development of Cognitive Science through the Lens of Kant’s Approach to Cognition”. Kim H., Schönecker D., eds. Kant and Artificial Intelligence. Berlin: De Gruyter, 2022. 3—38. https://doi.org/10.1515/9783110706611-001</mixed-citation>
      <mixed-citation xml:lang="en">Schlicht T. “Minds, Brains, and Deep Learning: The Development of Cognitive Science through the Lens of Kant’s Approach to Cognition”. Kim H., Schönecker D., eds. Kant and Artificial Intelligence. Berlin: De Gruyter, 2022. 3—38. https://doi.org/10.1515/9783110706611-001</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B24">
    <label>24.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Shanker S. G. Wittgenstein’s Remarks on the Foundations of AI. London: Routledge, 1998. xvi, 280 p.</mixed-citation>
     <mixed-citation xml:lang="en">Shanker S. G. Wittgenstein’s Remarks on the Foundations of AI. London: Routledge, 1998. xvi, 280 p.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B25">
    <label>25.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Simion M., Kelp Ch. “Trustworthy Artificial Intelligence”. Asian Journal of Philosophy 2 (2023): 8. https://doi.org/10.1007/s44204-023-00063-5</mixed-citation>
     <mixed-citation xml:lang="en">Simion M., Kelp Ch. “Trustworthy Artificial Intelligence”. Asian Journal of Philosophy 2 (2023): 8. https://doi.org/10.1007/s44204-023-00063-5</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B26">
    <label>26.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Williamson T. “Unexceptional Moral Knowledge”. Journal of Chinese Philosophy 49.4 (2022): 405—415. https://doi.org/10.1163/15406253-12340082</mixed-citation>
     <mixed-citation xml:lang="en">Williamson T. “Unexceptional Moral Knowledge”. Journal of Chinese Philosophy 49.4 (2022): 405—415. https://doi.org/10.1163/15406253-12340082</mixed-citation>
    </citation-alternatives>
   </ref>
  </ref-list>
 </back>
</article>
