Perception, Logic and Plurality of Rational Representations of the World

The article covers such issues as the relevance of the theory of perception as multi-level information processing, the methodological role of the concept of representation, and the relation of neurodynamic structures to subjective experience. The author critically reviews the philosophical presumptions underlying various concepts of “local rationality,” whose core is the belief that large ethnic cultures generate, or are based on, their own rationality and their own logic. Three statements are considered in turn: (a) thinking is based on inherent rules of rationality, (b) logic is an extract of rationality, and (c) types of rationality geographically coincide with large ethnocultural communities. Empirical arguments are presented that demonstrate the equivocality (if not falsity) of the first two theses. First, it is shown that the genuine rationality of thinking lies not in following rules immanent to it, but in the development of thinking and, more broadly, of cognitive operations toward optimizing certain indicators of the cognitive or motor system that are important for survival and adaptation. Moreover, this rationality is multivariate, and the choice between variants is often weakly determined or even random. Thus, the first statement is refuted. Second, with reference to well-known experiments, it is shown that most people do not explicitly follow declared logical rules even when solving logical or mathematical problems, and yet there is reason to consider their behavior rational. The third thesis, as shown on limited empirical material, appears to be partially confirmed. Nevertheless, the demonstrated doubtfulness of the first two theses makes the conclusion that different nations have different logics insufficiently substantiated.


Introduction
In the era of political correctness and of a guilt complex on the part of the former colonialist nations, ideas have spread about an epistemic equivalence between the cultures of various ethnic communities and the currently dominant American and West European culture. It is hardly possible to speak seriously of "another science" in cultures that have never developed a scientific foundation of their own. But the absence of philosophy or logic in the usual sense of these words can much more easily be interpreted as the presence of a "different" philosophy or logic, whose rights should be recognized. Russian authors write about this with varying degrees of conviction (e.g., [Gerasimova 2016; Karelova 2018; Smirnov 2019]).
This article is not intended to provide a detailed assessment of such views. I only want to draw attention to the fact that, as a rule, authors conclude that a "different thinking" exists among non-Western ethnic cultures on the basis of linguistic or sociological facts in their most abstract interpretation, by pointing out peculiarities of the grammatical structure of the languages or of the organization of public life. In other words, they substantiate their conclusions by means of exclusively philosophical reasoning, whereas scientific research is possible where it is founded on empirical and experimental methods. Using a limited number of examples, I hope to show that philosophy need not hasten to the rescue where and when a specific science can ensure greater objectivity.
As a rule, theories that discuss the existence of special forms of thinking in other ethnic cultures are based on the following philosophical assumptions: 1. Thinking explicitly or implicitly obeys certain immanent (1) rules of rationality. 2. The core or essence of rationality is a certain logic. 3. The types of rationality geographically coincide with large ethnocultural communities.
It is easy to notice that these premises add up to a certain syllogism, the conclusion of which can be phrased as the following proposition: 4. Consequently, large ethnocultural communities use different logics.
If we accept this idea, then a certain spatial structure emerges, which I would call the "iceberg of local rationality." At the base of this iceberg, hidden beneath the deep Arctic water, is what can be called "cognitive intuitions" (a term suggested by A.V. Smirnov [Smirnov 2019]). These are pre-rational predispositions of consciousness. Cognitive intuition, according to this concept, quite unequivocally determines the "type of rationality": the implicit or explicit rules of reasoning and decision-making in a given local culture. The construction is crowned with "logic," which consists of the rules of thinking explicitly formulated in that culture. Considered from the bottom up, the structure exhibits unequivocal determination: cognitive intuitions fully determine the type of rationality, which in turn dictates a certain logic. From the top down, the structure admits retrodiction: we can reconstruct and explicate the type of rationality of a culture from the logic it declares, and its cognitive intuitions from the type of rationality thus identified. In order to verify the applicability of such constructs, let us examine each of the premises of the syllogism.

Do we think rationally?
Understanding what kind of cognitive devices we humans are, and how these devices operate, will be clearer if we look at the science-fiction plot described in [O'Regan & Noë 2001] (cited from [Loginov & Spiridonov 2017]). Let us imagine that a team of oceanographers works on board a ship and has equipped a remote-controlled bathyscaphe to explore the remains of the sunken Titanic. However, a malicious and very smart sea monster has settled in the Titanic and has reconnected the cables coming from the bathyscaphe to the control panel, so that acoustic signals are now sent to the video monitors, and vice versa. The scientists are perplexed by the meaningless readings of their instruments, and all that remains for them is to look for regularities in the incomprehensible flicker and hissing, compare them with one another, and offer plausible hypotheses to account for these regularities.
The fact that the senses can deceive us was discovered by the ancient Greek philosophers, who proposed the Logos as a panacea, an alternative way of contemplating the truth. The subsequent history of philosophy, presenting as many different pictures of the world as there have been philosophers, shows that the mind deceives us too. Therefore, the analysis and comparison of flows of unreliable information (empirical and rational) in order to identify patterns and generate plausible hypotheses is the modus operandi both of an individual and of science as such.
As modern studies in behavioral economics and perception show, problems of rationality arise already at the level of perception. According to Teppo Felin et al., "Arguments about perception are inadvertently interwoven into the rationality literature through the use of visual illusions, metaphors, and tasks, as examples of bias, boundedness, and blindness" [Felin et al. 2017, 1056]. Cognitive science, as it were, rediscovers the arguments of Parmenides against the "world of opinion." And if studies in behavioral economics assume that there exists some unique objective reality and some unique rationality corresponding to it, with illusions and prejudices of perception and judgment treated as deviations, then, in my opinion, a philosophically correct approach tells us that there is no single "window on the world" possessing epistemic supremacy. Therefore, when we lack information, we choose one of the possible rational interpretations of reality in the hope that it will allow us to survive and adapt.
It seems that Felin et al. claim to perform a kind of Kantian or Copernican revolution in science: "What we are arguing for is thus a fundamentally different view of cognition. Awareness and perception are instead a function of the perceiver [rather than of perceived data], of the questions, probes, and theories that any of us impose on even the simplest of visual scenes or surroundings, or on reality more generally" [Felin et al. 2017, 1054]. In this vein, the authors discuss the widely known "Invisible Gorilla" experiment [Simons, Chabris 1999]. In the video, there are two teams whose players are dressed, respectively, in white and black T-shirts, and they pass balls, each to the players of their own team. The subjects, that is, the viewers, are given the task of counting how many times the players in white shirts pass the ball. Since the task is simple, almost all give the correct answer. However, only half of the subjects notice that in the middle of the game a person dressed as a gorilla comes out onto the field, beats its chest apelike, and leaves the stage. Moreover, none of the subjects pay attention to the fact that at the same moment one of the players on the black team leaves the game, and the background curtain changes color from red to golden (see a video reconstruction of the experiment: [Simons 2010]).
From the point of view of a more traditional, "realistic" approach, this effect illustrates the inefficacy of our sense perception, which is unable to reflect true reality. However, as the authors of the cited article rightly point out, "genuine" reality consists of a potentially infinite number of properties and relationships (the hair color and racial composition of the players, their tempos of movement, the letters on the curtain, etc.), and the totality of such detail is beyond any perception other than the divine (and in the latter case, is it even possible to speak of "perception"?). Therefore, the selectivity and limitations of real individual perception are manifestations of rationality rather than failures of it, and this rationality can be implemented in a variety of options.
From the same "Kantian-cognitive" (2) point of view, one can also consider other visual illusions known to science. Thus, the Ponzo illusion (Fig. 2), like the Ebbinghaus illusion (Fig. 3), demonstrates a "false" perception of the relative sizes of identical figures placed in a variable context. However, this perception turns out to be false only if we have an alternative access to reality, or an alternative way of representing it, for example, a drawing ruler. In a culture where the ruler is at the center of important practices, it looks more reasonable to check the measurements of figures with it, and to do so only in case of pragmatic doubt. In addition, at least in the case of the Ponzo illusion, the perceived sizes are "false" only insofar as the figures are superimposed on the image rather than lying on the depicted plane. If they were not mere white geometrical segments but 3D wooden bars left on the railroad tracks, we would have no doubt about the difference in their sizes. And such a representation would undoubtedly be rational.
From the above, we can conclude that our perspective perception (whereby distant objects seem smaller) is a gradually formed, economical, and therefore rational form of visual representation of complex spatial relationships, which is sufficient for our orientation and adaptation. Yes, it is not difficult to "deceive" our vision, but it is precisely by such means that nature, through selection, develops the most economical and energy-efficient ways of presenting the data needed for the statistically sufficient survival of individuals of a given species.
The comments given in the handbook on cross-cultural psychology [Shiraev, Levy 2016, 205-206] are of fundamental importance to our topic. According to cross-cultural studies, representatives of "Western" culture and residents of large cities, in contrast to the inhabitants of the "third world" and of rural areas, are the most susceptible to illusions such as the Ponzo illusion. The authors list a number of hypotheses explaining this difference, all of which somehow boil down to empirical learning. However, the recent revision of their book features a reference to Indian doctors who found a surgical way to restore vision in congenitally blind children. Surprisingly, the cured children, who were shown the Ponzo figure immediately, before they had had any time for cultural learning, regularly found the upper segment larger. The authors indicate that there is no generally accepted explanation for these new data.
In my opinion, some light on possible universal explanations of visual illusions is shed by a remark of David Marr, a recognized specialist in computational neuroscience, made in his major book Vision regarding perhaps the oldest and most famous of visual illusions, the Necker cube (Fig. 4). This image is characterized by the fact that the viewer can freely "switch" between two possible aspects: in one of them, the face closest to the viewer is seen as located below, and in the other as located above the farther one. As with the Ponzo illusion, the illusion appears in a two-dimensional projection of a three-dimensional picture. Therefore, we can assume that it is somehow connected with the way three-dimensional images are processed in our respective processing centers. Marr suggested that "part of the explanation of its [cube's] perceptual reversal must have something to do with a bistable neural network (that is, one with two distinct stable states) somewhere inside the brain…" [Marr 2010, 25-26]. In other words, the architecture of the numerous interconnected and mutually subordinated neural networks that make up the brain suggests the presence of many equally stable states in which these networks can become fixed after receiving data of a certain kind. Obtaining additional data can move the representation from one stable state to another, changing, for example, the visible aspect of the image. But the system as a whole is tuned to operate even with insufficient data, when one of the possible and, in principle, equally rational stable states is chosen at random, or the choice is determined by education, previous experience, cultural context, or physiology.

Fig. 4. Necker cube
Felin et al. agree with this interpretation, believing that "the human susceptibility to priming and sensitivity to salient cues is not prima facie evidence of irrationality, but rather provides evidence of this [cognitive] multistability. Whether we are dealing with perception or reasoning, in information-deprived and ambiguous situations humans use whatever evidence or cues (or demand characteristics) are available to make judgments" [Felin et al. 2017, 1055].
Thus, comparing the psychological data and the proposed psychological and neurophysiological interpretations, we can conclude, with a certain probability, that the structure of the cognitive apparatus is multistable and therefore creates the possibility of multiple "rational" perceptions and judgments. No one is doomed to a single perceptual style or specific visual choices in case of data ambiguity, although previous experience or cultural context may have some influence, increasing the likelihood of choosing one or another form of representation in preference to others. But the main conclusion is that rationality in the choice of perceptions, thoughts or decisions is fundamentally variable and relative.
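Marr's notion of a bistable network can be illustrated with a toy model: two units that inhibit each other settle into one of two stable configurations, and a small bias in the input decides which "interpretation" wins. This sketch is purely illustrative; the sigmoid units, the inhibition strength, and the bias values are my own assumptions, not a model of actual cortical circuitry.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def settle(bias_a, bias_b, inhibition=6.0, steps=500):
    """Two mutually inhibiting units driven by constant input biases.
    Units are updated sequentially so the dynamics settle to a fixed point."""
    a = b = 0.5
    for _ in range(steps):
        a = sigmoid(bias_a - inhibition * b)
        b = sigmoid(bias_b - inhibition * a)
    return a, b

# With symmetric inputs such a network has two stable states; a small
# difference in the input selects which one the dynamics fall into.
a, b = settle(1.2, 0.8)
print("first interpretation wins" if a > b else "second interpretation wins")
a, b = settle(0.8, 1.2)
print("first interpretation wins" if a > b else "second interpretation wins")
```

With strong enough mutual inhibition, the state in which both units are half-active is unstable, so any slight asymmetry in the input (or, in a richer model, noise) pushes the network into one of the two "winner-take-all" attractors, which is the formal sense of Marr's "two distinct stable states."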

Is thinking logical?
A very widespread and uncritically accepted belief is that logic is an immanent form of thinking and at the same time its descriptive (not normative) theory, since in our thinking we supposedly follow one or another set of immutable rules. And if such rules differ, then we are dealing with different logics. To verify this, we would need to show that actual human thinking obeys the logical rules declared in a specific culture.
But let us recall some well-known experiments. In his experiment of 1968, Wason gave his subjects packs of cards with letters on one side and numbers on the other. The subjects were given a rule: if there is a vowel on one side of the card, then the other side will certainly feature an even number. The subjects were shown four cards, whose visible sides bore, respectively, the symbols A, K, 2, and 7. The task was to answer the question: which cards should be turned over to make sure that the rule is observed? According to the logical rules of modus ponens and modus tollens, one needs to turn over the "A" card and the "7" card. However, the actual distribution of the subjects' answers was as follows: the "A" card, 89%; the "K" card, 16%; the "2" card, 62%; and the "7" card, 25%. It is obvious that for the most part people were guided not by the rules of logic declared in their culture, but by some other considerations (the experiment is described in [Schooler 2001, 12771]).
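The normative answer can be checked mechanically: a card needs to be turned over only if some possible hidden side could falsify the rule "vowel implies even number." A minimal sketch (the card values are those of the experiment; the helper names are mine):

```python
# Wason selection task: which cards must be turned over to test the rule
# "if a card has a vowel on one side, it has an even number on the other"?
VOWELS = set("AEIOU")

def is_vowel(x):
    return isinstance(x, str) and x in VOWELS

def is_even_number(x):
    return isinstance(x, int) and x % 2 == 0

def falsifies(visible, hidden):
    """The rule is violated only by a (vowel, odd-number) pairing."""
    letter = visible if isinstance(visible, str) else hidden
    number = visible if isinstance(visible, int) else hidden
    return is_vowel(letter) and not is_even_number(number)

def must_turn(visible):
    # Possible hidden sides: letters if a number shows, numbers if a letter shows.
    hidden_options = ["A", "K"] if isinstance(visible, int) else [2, 7]
    return any(falsifies(visible, h) for h in hidden_options)

cards = ["A", "K", 2, 7]
print([c for c in cards if must_turn(c)])  # ['A', 7]
```

Only "A" (modus ponens: a hidden odd number would break the rule) and "7" (modus tollens: a hidden vowel would break it) can falsify the rule; "K" and "2" are irrelevant, which is exactly what most subjects miss.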
In the Gigerenzer experiment, the subjects were offered the following conditions: suppose that 0.3% of the people have colon cancer. The probability that a colon cancer test will detect the disease is 50%, and the probability that it will erroneously show cancer in a healthy person is 3%. The question was: What is the probability that a person with a positive test has cancer? According to the Bayes rule, the correct answer is 4.8%. Note that Gigerenzer interviewed professional doctors whom you can hardly suspect of lack of experience or training. Nevertheless, the median response was 47%, almost 10 times higher than the mathematically correct one [Gigerenzer 1998, 17].
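The 4.8% figure follows directly from Bayes' rule applied to the numbers stated in the task:

```python
# Bayes' rule applied to Gigerenzer's diagnostic task,
# with the figures given in the text.
prevalence = 0.003          # P(cancer) = 0.3%
sensitivity = 0.50          # P(positive | cancer) = 50%
false_positive_rate = 0.03  # P(positive | healthy) = 3%

# Total probability of a positive test (law of total probability)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
posterior = sensitivity * prevalence / p_positive
print(f"{posterior:.1%}")  # 4.8%
```

The posterior is small because the false positives among the 99.7% of healthy people vastly outnumber the true positives among the 0.3% who are ill; the doctors' median answer of 47% effectively ignores the base rate.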
These experiments provoked intense discussion in the literature and generated a number of explanatory hypotheses (e.g., [Schooler 2001, 12772-12775]). Thus, L. Schooler, as a reviewer, believes that "performances on Wason (1968) card tasks, Gigerenzer's (1998) statistical tasks, and other experiments demonstrate that people are, in fact, irrational, when measured against accepted standard inferential rules [in their cultures]" [Schooler 2001, 12772]. Tversky and Kahneman believe that in these experiments people transfer heuristic and empirical rules that are successful in everyday life to laboratory conditions where they are not applicable. In these interpretations, one can see the conviction that any deviation from the only possible rationality is irrational, a judgment strongly contested by Felin and his co-authors.
On the contrary, John Anderson, a classic of cognitive science in its classical (i.e., symbolic) embodiment, believed that the critical step in what might be called "rational analysis" was to find out which indicator the cognitive system could be optimizing, in order to make predictions about how people would behave in specific experimental tasks [Anderson 1990, 28]. As if concretizing this idea, Oaksford and Chater emphasize that the subjects do not treat the task as a logical test, but rather try to establish a causal relationship between the two events [Oaksford & Chater 1996]. Having no relevant experience with cards, people interpret Wason's rule as a cause-and-effect relationship similar to the ones they have encountered in their past experience. In their opinion, in Wason's problem we face a situation in which behavior is irrational with respect to the laws of deduction, but can be quite rational in the context of how people usually seek information.
I would point out that a causal relationship is associative rather than logical, which is important for understanding the nature of rationality. According to Bender and Beller, "a modified version [of Lévy-Bruhl's theory], according to which two modes of thinking are distinguished - sometimes termed as rule-based vs. associative, reflective vs. intuitive, or abstract vs. content-specific - yet assumed to co-exist in all cultures [emphasis added], are still debated in research on thinking and reasoning, albeit controversially" [Bender & Beller 2011, 2].
A fairly important clarification was made by Gigerenzer himself. If the doctors tested were offered the same task not in percentage statistics but in simple arithmetic terms (30 out of every 10,000 people have colon cancer; 15 out of these 30 will test positive; at least 300 of the remaining 9,970 people without cancer will also test positive), then 67% of the subjects answered correctly, compared with 4% in the experiment where the data were presented according to the rules of probability theory. From this fact, we can conclude that a successful solution to the problem depends not only on the content and complexity of the data, but also on the conformity of the form of their representation to everyday practices of information search [Gigerenzer 1998, 17].
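In the natural-frequency format the same computation reduces to a single division over counts of people, which plausibly explains why it is so much easier:

```python
# Gigerenzer's task restated in natural frequencies, as in the text:
# of 10,000 people, 30 have cancer; 15 of them test positive;
# about 300 of the 9,970 healthy people also test positive.
true_positives = 15
false_positives = 300

# Of everyone who tests positive, what fraction actually has cancer?
posterior = true_positives / (true_positives + false_positives)
print(f"{posterior:.1%}")  # roughly 4.8%
```

The answer matches the Bayes-rule result, but here the base rate is built into the counts themselves, so there is nothing left to neglect.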
As I believe, the discussions presented in this section make the following generalization quite reasonable. Natively, we humans, like other animals, are associative and statistical devices rather than logical machines, since we have "on board" not a serial but a parallel (neural-network) computer. We have to follow the deductive rules of science, law, and other social institutions that carry out and regulate social computing. Logic is an essential product of these institutions, yet it tells us very little about actual human thinking.

Do ethnic cultures think differently?
So, what remains to find out is whether there is any reason to speak of sustainable types of rationality, widespread across significant cultural or ethnocultural spaces, that would be determined by stably fixed cognitive intuitions common to whole nations and even "civilizations." Many studies have been conducted in this field, in various parts of the world, with different goals and based on various methodological approaches. As a rule, such comparative studies are organized in social environments where representatives of different ethnic cultures live compactly side by side. In this brief discussion article, it is impossible to provide a comprehensive review of these materials. Some evidence, albeit brief and fragmentary, can be obtained from the book by Shiraev and Levy [Shiraev, Levy 2016 and other editions]. I will confine myself to a study of the distribution of perceptual and thinking styles among Kazakh youths, as described in [Zhumagaliyeva & Barabanova 2014].
The authors of the study worked with young people divided into three groups: Kazakhs whose native language is Kazakh (KZKZ), Kazakhs whose native language is Russian (KZRU), and Russians with Russian as their native language (RU). Each group was further divided by gender. Field dependence was chosen as the main indicator for measuring perceptual style; it is standardly defined as "dominance of the whole and insufficient differentiation of the parts in the image of perception, inability to overcome the context, non-isolation of individual stimuli from the background," while its opposite, field independence, is understood as "the ability to resist the influence of conflicting background signs in the perception of visual forms and connections, the ability to perceive the whole, to isolate stimuli from the context" [Kondrashikhina 2009, 48]. According to the researchers themselves, the data obtained "do not allow us to speak of any significant influence patterns of ethnic and linguistic features on the cognitive style of the individual or vice versa" [Zhumagaliyeva & Barabanova 2014, 767].
A somewhat different picture is observed when the same sample is distributed across styles of thinking. The following types were identified: analysts, synthesizers, idealists, realists, and pragmatists. Among the young men of the Kazakh-speaking group, only analysts and synthesizers were identified, in roughly equal proportions. Among the Kazakh-speaking girls, only realists are missing, and the remaining types are distributed approximately equally. In the KZRU and RU groups, half of the young men are analysts, and the rest are either evenly split between idealists and realists (KZRU) or are all realists (RU). The girls in both Russian-speaking groups are distributed more or less evenly, except that pragmatists are missing in KZRU, and synthesizers are not found in RU. The authors of the study conclude that a selective comparison without gender differentiation "says that KZRU and RU groups are more similar to each other than [to] KZKZ group" [Zhumagaliyeva & Barabanova 2014, 768].
Of course, the data gathered are insufficient to conclude whether or not the nature or structure of a language affects the cognitive styles of its native speakers. But they show that, in some cases, there is a weak correlation, and therefore such an effect is possible. Some other cognitive studies confirm the influence of language and social institutions on the style and nature of an individual's cognition. As Felin and his co-authors note, "human interaction in social, institutional, and organizational settings is likely to significantly shape how rationality 'aggregates.' This is certain to be far more complicated than simple, linear addition, given complex, emergent outcomes" [Felin et al. 2017, 1055].
Thus, the relative dependence of the thinking styles of ethnic groups on their linguistic identity, combined with the nearly complete similarity of their perceptual styles, suggests that language plays a significant role in the formation of the cognitive portraits of ethnic cultures.

Conclusion
We started with the philosophical assumptions that underlie the belief in sustainable types of rationality, meaningfully and geographically correlated with the cognitive styles of ethnocultural communities. Three of these assumptions form the premises of a syllogism, and the fourth its conclusion. As a result of a brief review of empirical data, we found, first, that the true rationality of thinking does not consist in following rules immanent to it, but in thinking and, more broadly, cognitive operations being oriented toward optimizing those indicators of the cognitive or motor system that are important for survival and adaptation. Moreover, this rationality is multivariate, and the choice between options is often loosely determined or even random. Thus, the first premise of the syllogism is refuted. Second, the experiments described in the literature show that most people do not follow explicitly declared logical rules, even when solving logical or mathematical problems, and yet their behavior can be considered rational, given that they employ familiar and economical ways of finding information and establishing causal relationships. Thus, the second premise of the syllogism is also refuted.
On the contrary, the third premise has found partial confirmation. There are reasons to believe that the cognitive styles of large groups of people are determined by their ethnic and cultural characteristics, such as their native language.
But, nevertheless, since the conclusion of the syllogism can be considered undoubtedly true only on condition that all its premises are true, we must state that we have no sufficient reason to believe that different ethnocultural communities use different logics in their thinking.

NOTES
(1) By immanent rules I mean, in this case, rules that are revealed to thinking itself, that is, rules of which thinking itself can be aware.