MODES OF SOCIOCULTURAL DEVELOPMENT: NEW CHALLENGES – NEW SOLUTIONS. Dialogue with a Technosubject
The article critically examines the capabilities and limitations of artificial intelligence (AI) technologies within the context of ongoing debates surrounding potential threats stemming from their advancement. The study scrutinizes and challenges key arguments positing the possibility of human subjugation by AI systems. The author undertakes a comparative analysis of natural and artificial intelligence, employing the psychological experiences of C.G. Jung as a case study. It is demonstrated that despite the remarkable achievements of contemporary neural networks in information processing, they are fundamentally limited in their capacity for genuine comprehension and creative thought. The paper identifies three key innovations associated with AI development: the enhancement of users’ cognitive capabilities, the formation of a novel psychic reality of “digital consciousness,” and the emergence of hybrid life forms at the nexus of human activity and technological processes. The author highlights the fundamental limitations of AI in the realms of emotional intelligence and creative capabilities. Attention is drawn to the challenges associated with the development of AI systems, including the influence of impersonal social structures on decision-making, the disconnect between developers and users, and the psychological effects of interacting with AI. The conclusion reached is that the issue of human subjugation by AI requires a re-evaluation within the broader context of the impact of contemporary technologies on society. It is proposed that the forthcoming era be viewed as a period of coexistence and interaction between two types of intelligence: natural and artificial. Apprehension is expressed that humanity will not adjust its worldview and behavior until after experiencing a series of impending catastrophes. 
In closing, the author advocates for proactive engagement in mitigating the risks associated with AI development, while simultaneously underscoring the impracticality of complete abstention from these technologies.
The contemporary digital reality is inconceivable without artificial intelligence (AI), which has permeated all cultural practices, from scientific and artistic endeavors to everyday activities. AI increasingly functions as an agent of communication and decision-making, gradually surpassing human capabilities across nearly all competencies. The information flows of this new reality can only be navigated through hybrid systems based on post-critical rationality, which inherently introduces an irreducible element of uncertainty and risk in human-machine environments. The article proposes examining the technosubject through the lens of activity theory and the multiple types of rationality it generates. This framework facilitates the analysis of sociocultural and anthropological implications arising from AI’s integration with human domains, while addressing the existential challenges inherent in constructing a harmonious hybrid society. Beyond V.S. Stepin’s types of scientific rationality, the author builds upon previously introduced forms of rationality: post-critical, object-oriented, instrumental, subjective, results-oriented, creative, and autopoietic. This theoretical framework facilitates a substantive discussion of various manifestations of AI subjectivity, including its generalized embodiment and creative specificity. The author proposes resolving the demarcation of dominance domains between natural intelligence and AI in the intellectual sphere on the basis of their heuristic potentials, maintaining that natural intelligence invariably possesses superior capacity in this regard. The article examines approaches to risk assessment in AI implementation strategies, focusing on criteria for preserving anthropological and sociocultural profiles in the development of hybrid society.
Advancing the concept of friendly AI is substantiated as essential, with consideration given not only to technological but also to anthropological aspects of human–machine interaction. The author advocates for the development of social examination institutions as regulatory mechanisms for natural–artificial intelligence interaction and anthropological–technological subject interfaces.
The rapid development of artificial intelligence and digital technologies has inaugurated a transformative epoch, challenging traditional conceptions of subjectivity and cognition. The article examines the concept of technosubjectivity within the framework of complexity thinking and transformational anthropology. Focusing on potential ontological and epistemological shifts stimulated by the recent emergence of new forms of artificial intelligence, particularly generative neural networks and large language models, the authors investigate how these technologies transcend the boundaries between human and machine, generating novel forms of agency and subjectivity. Drawing upon the works of E. Morin, N. Luhmann, G. Simondon, and G. Spencer-Brown, the authors propose a mediative perspective that reveals the post-anthropological interconnectedness and coevolution of human and technological systems. The outlined approach illuminates the dynamic, processual nature of technosubjectivity as an individuating form that emerges through recursive interactions and communicative acts. The article argues that understanding technosubjectivity requires moving beyond simplifying reductionist paradigms and embracing a relational ontology that acknowledges the distributed and emergent properties of cognition and subjectivity in the digital age. The concept of “thinking-together-with” is employed as an alternative to the traditional cybernetic paradigm of control, enabling a reconceptualization of the relationships between humans and artificial intelligence in terms of horizontal collaboration and mutual enrichment. Special attention is paid to the role of distinction and communication in the process of technosubjectivity formation. The article concludes by examining the ethical and philosophical implications of this new form of subjectivity. 
The authors advocate for more integrated and collaborative relationships between humans and artificial intelligence, rooted in complexity-oriented thinking and mutual co-formation.
The article examines the dialogue between British cognitive science expert Murray Shanahan and the large language model Claude 3 Opus about the “self-awareness” of artificial intelligence (AI). Adopting a text-centric approach, the author analyzes AI’s discourse through a hermeneutic lens from a reader’s perspective, irrespective of whether AI possesses consciousness or personhood. The article draws parallels between AI’s reasoning about the nature of consciousness and Buddhist concepts, especially the doctrine of dharmas, which underpins the Buddhist concept of anātman (“non-Self”). Basic classifications of dharmas and their justification are examined in light of the Buddhist system of ideas about the foundations of an individual’s cognitive experience in the world. The author emphasizes that the problem of the Self as a linguistic and conceptual construct, rather than a real ontological category, was first formulated in the teachings of Buddha Shakyamuni, who also proposed an “experimental” application of this concept in practices of systematic introspection (smṛti). The article contends that Claude’s discourse on self-awareness, even if it is just a tapestry of linguistic constructs woven by preset algorithms, could prove to be a source inspiring new approaches to the enigma of consciousness. This potential stems from its vast database, which is a melting pot of textual heritage from diverse human cultures. The author posits that examining AI-generated texts through the prism of Indian and Buddhist thought traditions can be eye-opening. Such an approach might help shed light on and overcome the unconscious cognitive biases and cultural blind spots within Western consciousness studies that have hindered their engagement with the full spectrum of human intellectual traditions.
The author concludes that discovering different cultural sources in AI discourse and examining it from the perspective of various cultural traditions can: firstly, enrich the conceptual apparatus of cognitive studies; secondly, reveal universal cross-cultural patterns in understanding consciousness; thirdly, generate new research hypotheses and directions in studying not only artificial but also natural intelligence; fourthly, contribute to rethinking our understanding of the Other, by expanding the boundaries of what we today consider conscious or sentient.
The article presents a socio-philosophical analysis of artificial intelligence (AI) integration into public administration systems. The research focuses on identifying an optimal balance between enhancing administrative efficiency and preserving humanistic values. The author examines diverse perspectives on AI’s role in contemporary society, ranging from techno-optimistic concepts that view AI as a tool for qualitative improvement of human life, to critical theories warning of dehumanization risks and increased social control. The paper conducts a comparative analysis of national AI development strategies among leading global powers, identifying their common features and significant differences shaped by cultural, political, and economic factors. Potential risks and threats associated with the implementation of AI systems in public administration are explored, including issues of personal data protection, information security, and the ethical dimensions of algorithmic decision-making. The concept of a human-centered approach to AI is examined as a potential guiding principle for the development and deployment of these technologies. Various levels of control over AI systems are characterized, encompassing legal regulation as well as professional and public evaluation. Particular attention is given to the prospects of artificial general intelligence (AGI) development and its potential impact on the transformation of state institutions and social relations. The study argues that AGI architecture, enabling genuine system agency, must incorporate a level responsible for actualization functions (strategic goal-setting, ethics, knowledge, and self-identification). Special emphasis is placed on the system’s awareness of its finite existence as a necessary condition for developing meaningful operational strategies and ethical principles.
The article concludes that as AI technologies advance, the importance of ethical norms, value systems, and responsibility principles increases since these core societal factors cannot be fully replaced even by the most sophisticated regulation. The author highlights the growing significance of mutual trust between state and society in an environment where AI systems provide unprecedented opportunities for social control.
The article examines the impact of contemporary intellectual technologies on human subjectivity through the lens of 20th-century philosophical reflection. It explores the transformation of the relationship between humans and technology in a context where technological systems transcend the traditional understanding of technology as merely an extension of human capabilities. Drawing on the conceptual framework of the philosophy of technology (M. Heidegger, J. Ortega y Gasset, J. Ellul, H. Marcuse), the author identifies three key aspects of this transformation. First, the article considers the process by which various facets of human activity – cognitive processes, emotional reactions, social relationships, and creativity – are transformed into “standing-reserve” (Bestand) for technological systems. Second, it analyzes the phenomenon of the erosion of practices that reproduce and develop human experience, as evidenced by the standardization of cognitive processes, the emergence of intellectual dependency, and cultural homogenization. Third, it investigates the problem of technological determinism, which, in the context of intellectual technologies, takes on the character of not merely an external constraint but an active construction of human subjectivity and agency. Special attention is given to the mechanisms through which intellectual technologies transform processes of identity formation, decision-making, and social interaction. Furthermore, the article considers the intersubjective interaction between humans and intellectual technology, emphasizing that an imbalance in this relationship may lead to the erosion of human subjectivity. In closing, the article advocates for the development of new approaches to the legal regulation of intellectual technologies to preserve the balance between technological advancement and the maintenance of human autonomy.
The author concludes that the relationship between humans and intellectual technologies has become a central issue in contemporary philosophical anthropology: preserving human subjectivity in the era of artificial intelligence will require a critical rethinking and partial transformation of traditional conceptions of human nature, values, and the normative foundations of human activity.
SCIENTIFIC LIFE. The Intellectual Path
The article presents a retrospective analysis of the author’s 70-year contribution to philosophy and science, focusing primarily on the hard problem of consciousness – the relationship between mental phenomena and neurophysiological processes. The author traces his intellectual journey in addressing this fundamental challenge to natural scientific knowledge and presents his conceptual framework, initially developed in the early 1960s. The discussion examines the significant challenges faced in defending an information-based approach to consciousness, particularly during the notable debate with E.V. Ilyenkov and subsequent opposition from his followers who held considerable influence in philosophical circles. Despite ideological pressure and accusations of revisionism, the author successfully advanced his theoretical framework, publishing numerous books and articles that elaborate on his theoretical and methodological approaches to decoding neural correlates of subjective reality. Following the publication of an English-language article outlining his theoretical foundations in a leading international neuroscience journal in 2019, the author received extensive recognition, including invitations to keynote international conferences and join editorial boards of international journals. This response demonstrates the pressing need for theoretical and methodological developments in neuroscience and related disciplines, highlighting a deficit in fundamental theoretical frameworks capable of guiding and integrating empirical research. The article concludes by emphasizing the significance of the information paradigm and its derivative approaches for studying consciousness, mental processes, and genetic aspects of biological functions.
Greeting. David Izrailevich Dubrovsky, the renowned Russian philosopher, methodologist, Doctor of Philosophy, professor, Chief Editor of the Russian Journal of Philosophical Sciences, Chief Research Fellow of the Institute of Philosophy of the Russian Academy of Sciences, Professor of Lomonosov Moscow State University, Deputy Chairman of the Scientific Council on Methodology of Artificial Intelligence and Cognitive Research at the Presidium of the Russian Academy of Sciences, veteran of the Great Patriotic War, has turned 95 years old!
ISSN 2618-8961 (Online)