Russian Journal of Philosophical Sciences

Subjectivity of Explainable Artificial Intelligence

https://doi.org/10.30727/0235-1188-2022-65-1-72-90

Abstract

The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons, their parameters number in the trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models are becoming more complicated, extending to the quantum and non-local levels. The world’s leading companies are investing heavily in creating explainable AI (XAI). However, the results remain unsatisfactory: a person often cannot understand the “explanations” given by AI because it makes decisions differently than a person does, and perhaps because a good explanation is impossible within the framework of the classical AI paradigm. AI faced a similar problem 40 years ago, when expert systems contained only a few hundred logical production rules; it was then solved by complicating the logic and building additional knowledge bases to explain the conclusions given by AI. At present, other approaches are needed, primarily those that take into account the external environment and the subjectivity of AI systems. This work focuses on solving the problem by immersing AI models in the social and economic environment, building ontologies of that environment, taking the user’s profile into account, and creating conditions for the purposeful convergence of AI solutions and conclusions toward user-friendly goals.

About the Author

Alexander N. Raikov
Institute of Control Sciences, Russian Academy of Sciences; MIREA – Russian Technological University
Moscow, Russian Federation

Alexander N. Raikov – D.Sc. in Technology, Leading Research Fellow, Institute of Control Sciences, Russian Academy of Sciences; Professor, MIREA – Russian Technological University.






For citation:


Raikov A.N. Subjectivity of Explainable Artificial Intelligence. Russian Journal of Philosophical Sciences. 2022;65(1):72-90. (In Russ.) https://doi.org/10.30727/0235-1188-2022-65-1-72-90



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 0235-1188 (Print)
ISSN 2618-8961 (Online)