DOI: https://doi.org/10.32515/2664-262X.2022.5(36).1.119-124

Risks of Implementing Artificial Intelligence in Computer Systems

Konstantyn Marchenko, Oleh Oryshaka, Anzhelyka Marchenko, Anna Melnick

About the Authors

Konstantyn Marchenko, Associate Professor, PhD in Technical Sciences (Candidate of Technical Sciences), Central Ukrainian National Technical University, Kropyvnytskyi, Ukraine, e-mail: k_marchenko@i.ua, ORCID ID: 0000-0001-6269-5379

Oleh Oryshaka, Associate Professor, PhD in Technical Sciences (Candidate of Technical Sciences), Central Ukrainian National Technical University, Kropyvnytskyi, Ukraine, e-mail: oryhsaka@gmail.com

Anzhelyka Marchenko, student, Central Ukrainian National Technical University, Kropyvnytskyi, Ukraine

Anna Melnick, student, Central Ukrainian National Technical University, Kropyvnytskyi, Ukraine

Abstract

Since the absolute reliability of computer systems, and of the information processes that run in them, cannot be guaranteed, the task of this research is to identify critical areas where such errors and failures are unacceptable. The main problems with introducing artificial intelligence into computer systems are the impossibility of foreseeing all real-world situations and programming the machine's behavior adequately for them, insufficient reliability, and software errors. The input data on which artificial intelligence is trained may itself be incorrect. In addition, artificial intelligence systems are influenced by the mindset and values of their developers, who are not always familiar with psychology, sociology, and the other humanities. These shortcomings have led to many incidents during the use of artificial intelligence systems, including fatal ones. Analysis of a sample of reports on artificial intelligence errors allowed the authors to determine in which areas errors are critical, i.e., where the use of artificial intelligence systems carries significant risk. In particular, these are medicine, military affairs, transport, manufacturing where people and robotic systems cooperate, hazardous industries, energy, social management, legal institutions, and others. At present there is no regulatory and legal framework for the use of artificial intelligence, so its implementation proceeds spontaneously, which leads to unpredictable results and accidents. Artificial intelligence used in critical infrastructure, and in areas affecting human health and life, belongs to the high-risk category. Based on this analysis, and because the absolute reliability of computer systems and their software cannot be ensured, the authors do not recommend using artificial intelligence in areas related to safety, health, and human life, especially the lives of large groups of people.
Devices that use artificial intelligence systems should be labeled accordingly, with a clear warning about the limited reliability of the device with respect to safety and about the consumer's responsibility for using such a device. The authors strongly discourage the use of artificial intelligence for responsible decision-making in areas related to the security of large groups of people.

Keywords

information processing, computer systems, algorithm, software, reliability, artificial intelligence, risks, life safety, labor protection


References

1. Kompiuterna systema [Computer system]. Wikipedia. Retrieved from https://uk.wikipedia.org/wiki/Katehoriia:Komp%27iuterni_systemy

2. Malkov, M.V. O nadezhnosti informatsionnykh sistem [On the reliability of information systems]. Trudy Kolskogo nauchnogo tsentra RAN. 2012. Vol. 3, No. 4. P. 49-58.

3. Nadezhnost (kompiuternye nauki) [Reliability (computer science)]. Wikipedia. Retrieved from https://ru.wikipedia.org/wiki/Надёжность_(компьютерные_науки)

4. Bychkov, S.S. Povyshenie urovnia nadezhnosti informatsionnykh sistem [Improving the reliability of information systems]. Vestnik SibGAU im. M.F. Reshetneva. 2014. No. 3. P. 42-47.

5. Kazarin, O.V., Shubinskii, I.B. Nadezhnost i bezopasnost programmnogo obespecheniia [Reliability and security of software]: a textbook for bachelor's and master's degree programs. Moscow: Iurait, 2018. 342 p.

6. Katastroficheskie posledstviia programmnykh oshibok [Catastrophic consequences of software errors]. Retrieved from https://www.pvsm.ru/programmirovanie/241956

7. II posovetoval patsientu umeret: samye krupnye oshibki mashinnogo obucheniia [AI advised a patient to die: the biggest machine learning errors]. Retrieved from https://hightech.fm/2021/09/02/ai-failures

8. «Plokho obuchennyi iskusstvennyi intellekt opasnee vosstaniia mashin» ["Poorly trained artificial intelligence is more dangerous than a machine uprising"]. Retrieved from https://www.hse.ru/news/expertise/506082229.html

9. Zhukov, L. Pochemu liudi v blizhaishem budushchem ne smogut polnostiu doveritsia II [Why people will not be able to fully trust AI in the near future]. Retrieved from https://trends.rbc.ru/trends/industry/5fb52daf9a7947234c4d28d3

10. Andrash, Iu. Kto neset otvetstvennost za prestupleniia iskusstvennogo intellekta? [Who bears responsibility for the crimes of artificial intelligence?]. Retrieved from https://www.lansky.at/ru/newsroom/news-media/zhurnal-lgp-news-022021/kto-neset-otvetstvennost-za-prestuplenija/#

Copyright (c) 2022 Konstantyn Marchenko, Oleh Oryshaka, Anzhelyka Marchenko, Anna Melnick