University of Silesia in Katowice

26.07.2024, 16:01 | updated 16.08.2024, 14:34

RESEARCH EXCELLENCE INITIATIVE


FREEDOM OF RESEARCH – SCIENCE FOR THE FUTURE

The ‘Freedom of research – science for the future’ series consists of articles, interviews, and short videos presenting research conducted by the winners of the ‘Freedom of Research’ competition.

Prof. Mariusz Wojewoda and Prof. Krzysztof Wieczorek

Philosophy of artificial intelligence

Ewa Szwarczyńska, PhD

Does man – as the creator of AI – deserve to be called the new creator? Can his creation become an equal partner for him, or perhaps a competitor? Or is this just a utopian vision of reality? These and other questions are answered by the winners of the 2nd edition of the ‘Freedom of Research’ competition of the Research Excellence Initiative – Mariusz Wojewoda, PhD, DLitt, Assoc. Prof., and Prof. Krzysztof Wieczorek from the Institute of Philosophy of the University of Silesia – in conversation with Ewa Szwarczyńska, PhD.

From the left: Prof. Mariusz Wojewoda and Prof. Krzysztof Wieczorek | Photo by Ewa Szwarczyńska, PhD

Ewa Szwarczyńska: Utopia – a digital paradise, a better world – is one of the possible future scenarios arising from the presence of AI in human life. Does a techno-utopian vision of the future lie ahead of us?

Mariusz Wojewoda: If we think about the word “utopia”, its etymology indicates a place that does not exist, or a good place. Creating visions of a better world is quite an old human pursuit. Usually, when we talk about utopia, we think of Renaissance utopias, including Thomas More’s Utopia. Nowadays, the concept concerns the search for a better life for man. We introduce technology and various technical artifacts to improve the quality of our lives – to repair or improve the mind and/or the body. To do this, we need robots equipped with AI that will act as caregivers in old age or illness, interfaces connecting the brain with a computer that improve the functioning of the aging brain, or a smartphone application that will serve as a life companion. The question then arises: what will be the long-term effects?

We are to be members of Society 5.0, taking advantage of the opportunities associated with the Internet of Things (IoT), virtual reality, and fast Internet access from anywhere. Network resources will enable the creation of a knowledge society. The ways of learning will change; we will increasingly rely on education delivered by an “artificial” teacher. However, this requires the development of new competences. Will we then continue to trust human authorities, or will artificial intelligence become an authority for us? Will knowledge still entail free choice and independent cognition, or will it be specially prepared for us?

This is thinking in terms of designing a vision of the future. We can be optimistic about it and assume that we will be able to adapt to change despite its dizzying pace. It is possible that our brains cannot keep up with this pace, although we are quite diverse in this respect.

Ewa Szwarczyńska: If we were to talk about the university as a place for collecting and producing knowledge, wouldn’t this be too narrow an approach, one which omits the concept of wisdom? Is it enough for us to collect information? Isn’t there more to it than that?

Krzysztof Wieczorek: I am an enthusiast of Stanisław Lem’s work. As a futurologist, he predicted many phenomena that are happening today. In the column titled ‘Intelligence, Reason, Wisdom’, he grades the competences of consciousness, wondering to what extent AI will be able to cross successive thresholds – first the threshold of consciousness, then the transition from simple, task-oriented intelligence to reason, and finally from reason to wisdom. If these considerations were applied to the university, we should consider to what extent we – as an institution involving many people – are able to cross these thresholds. Are we just a kind of corporation of minds, used to reproduce certain intellectual procedures, or can we do something more: pose problems, set new perspectives, answer questions that have an existential dimension?

Artificial intelligence is gradually becoming our partner and ally in the search for knowledge, but the question is whether it can also become our partner in the search for wisdom. Will we be able to use our engagement with AI not only to protect humanistic values, but also to expand them and to teach artificial consciousness to think in problem-oriented, humanistic, and axiological categories? Or will we fail in this area once again and become an annex to the machines we ourselves have built?

Ewa Szwarczyńska: The transition from “information” to “knowledge” seems to be more understandable than the transition from the level of knowledge to the level of wisdom. What is the qualitative difference between these degrees?

Mariusz Wojewoda: Wisdom is rather an added value. One cannot create a school where wisdom is taught, and this means that it happens incidentally – not because that is the intention. It is a rare element. We say that machines provide information and maybe they will overtake us in knowledge (or it is possible that we will somehow communicate with them and our knowledge will be better), but wisdom is still an exclusively human phenomenon. We don’t really expect a machine to be wise.

Krzysztof Wieczorek: When we talk about the relationship between knowledge and wisdom, it seems to me that while knowledge is pragmatic and serves to achieve specific goals useful to someone, wisdom goes beyond this level of utilitarianism towards the transcendentals – truth, goodness, beauty. The ancients spoke of wisdom as a virtue directly related to values, mainly to the highest values, which – as transcendentals – transcend categorical thinking. And we only know human wisdom, whose characteristics include selflessness, kindness, sacrifice, and generosity. These are features that cannot be defined operationally, and yet we intuitively know what they mean. However, it is possible to achieve another type of wisdom, one which is not human wisdom. Such a thought experiment was the novel Solaris by Stanisław Lem, whose protagonist was the intelligent ocean of the titular planet, to which Earthlings sent their research expedition. This ocean had its own wisdom that was incompatible with human wisdom. It is also possible that by launching deep, unsupervised machine-learning processes, we will give rise to a machine wisdom that will be unlike human wisdom. It will be an autonomous wisdom pursuing its own path of access to the transcendentals, and it can be a fascinating adventure if we manage to understand each other.

Mariusz Wojewoda: However, for now this is only a possibility. We do not yet have such technical capacity. The AI we have is at the level of weak artificial intelligence. Three levels are distinguished: weak AI, strong AI, and superintelligence. What we are saying about these aspects of wisdom would belong to the third level, while we are still at the first. This may happen, but as of right now it is still a utopia that we are talking about.

Ewa Szwarczyńska: Stephen Hawking stated that in the future computers will overtake people in terms of intelligence, and that it is important for their goals to be consistent with human goals. In this context, the question arises about the place and importance of AI in the world of values. Can we consider AI itself to constitute a value, or, on the contrary, is it a threat to values?

Krzysztof Wieczorek: In his essay ‘The Question Concerning Technology’, Martin Heidegger suggests that technology be considered on two different levels. Means, instruments, and tools, which are instrumental in nature and do not go beyond their application to specific functions, are different from technology itself as an environment. Heidegger’s essay begins with the rather provocative claim that the essence of technology is nothing technological: technology has become a new home, a new environment for human life, one which has caused our lives, valuations, thinking, and goals to change.

By using many different tools on a daily basis – a smartphone, computer networks, instant messengers, etc. – we have created an entire “network of multiverses”, alternative realities: virtual reality, as well as a hybrid reality, some elements of which are taken from the physical, sensory world and some from the virtual world – and for us they form one common conglomerate. We live in a world that we have changed through our actions, and which is also changing us. In my opinion, the fact that today we live in this multiverse means that we also live in a different axiological space. We have different needs, different expectations, different hopes.

There is also the idea that each successive technological revolution is initially perceived as a threat, primarily because new technologies, new machines, and new devices appear that will take away a person’s job, and the person, caught off guard, will not know what to do. But it turns out – when we trace the history of technological evolution – that people cope: when they are pushed out of one sphere of activity, they soon find a new one, and this new sphere sometimes turns out to be more interesting, more creative, and more satisfying than what they did before. Once upon a time, a worker on an assembly line in a Ford factory spent his whole life screwing the same type of nut onto the same type of screw with the same type of movement. Today our work is much more creative. Therefore, if AI starts to displace us from more professions (for example, there are forecasts that ChatGPT will destroy the profession of the journalist, because it will produce news and cope better in the world of instant messengers than humans do), then we will find other employment and other interests, and we will pursue other values.

Mariusz Wojewoda: If we are to tame AI, we must tame it axiologically, that is, provide it with the values that are important to us so that it does not pose a threat to us. It may be that it will be useful for an organisation or institution, but I will be able to trust it only when I decide that it will not put the interest of the institution above my own good and will not instrumentalise me.

Ewa Szwarczyńska: The question is whether we can use the concept of value at all in relation to AI, or whether it is just some kind of priority inherent in this creation?

Krzysztof Wieczorek: Value systems are very often a pretext for conflicts of values. If we construct, or try to move towards, an AI that would be universally kind to everyone, would it actually meet human expectations – not only those declared verbally, but also those that we actually live by? It is hard for me to imagine enthusiasm for someone who would be equally kind to Kargul and Pawlak, the feuding neighbours of Polish film comedy. If we are talking about kindness, let it be someone who is kind to me but ruthless towards my enemies – as in the Old Testament. There is a certain danger here: we ourselves are not entirely consistent in formulating our expectations and our goals. Perhaps we think that we want to build a kind of friend to everyone, but what we actually have in mind is someone who will embrace us while not being so kind to someone else. And this is a different goal and a different system of values.

Ewa Szwarczyńska: When we talk about the purposefulness of the work carried out on AI, the question arises whether, apart from its pragmatic dimension, it can be seen as a manifestation of the human need to seek the Other, or of the desire to have creative powers – in other words, to be like God. Who is the creator of AI? A seeker, a discoverer, or perhaps a creator? Which need plays the dominant role here – discovery or creation? Or maybe it is just pure pragmatism?

Krzysztof Wieczorek: While pondering this problem, I came across an excerpt from The Street of Crocodiles by Bruno Schulz. In the Treatise on Mannequins, the author puts forward the thesis that man has discovered in himself a very deep-seated need to be a second creator. Schulz calls it the “second-generation demiurge”. Since we have learned to think of ourselves in the Christian paradigm – man as a work of God, created in the image and likeness of the Creator – we have discovered a longing to pass on this similarity that God has placed in us, as if to the next generation of creatures, for whom we will be gods. And if there is any element of truth in this (and I think there is, because if we look at the many myths, legends, and fairy tales told in various cultures, this human longing for a certain agency, for metaphysical fatherhood, can actually be decoded in certain mythological or fairy-tale topoi), it would give a new dimension to our actions. Therefore, perhaps an additional force that encourages us to invest as much creativity as possible in ever newer forms of AI is the longing to replace the Creator to some extent and to become a god for someone else. Not for ourselves anymore.

God was endowed with omnipotence by definition, but when creating humans – according to the biblical stories – he gave up his omnipotence in favour of equipping them with freedom. This can be interpreted in terms of God’s greatest mistake: Friedrich Nietzsche and Emil Cioran claim that God’s greatest mistake was to endow man with free will, because in practice it meant that man would inevitably rebel against his Creator. And now the second-generation demiurge is trying, in a sense, to outsmart God – that is, not to make the same mistake twice. He therefore tries to equip his “imago hominis”, his creation, with full availability and full controllability, so that it does not rebel. We design AI concepts that are to be fully dependent on us. Even when it makes decisions autonomously, we put great effort into implanting our values, which are to be obligatory, into the AI decision-making process.

In Lem’s para-novel Golem XIV, there is a thread in which machines, programming themselves in successive, ever more perfect generations, have been pre-programmed in such a way that certain unbreakable ethical norms are built in at the level of drives (and not at a level that could be altered), which protect humans against a possible rebellion of the machines. And here lies the fundamental difference: God has not equipped us with any such obligatory ethical norms that we cannot overcome. In fact, there is no inviolable value for man, and he can trample on any sacredness.

Ewa Szwarczyńska: Is philosophy able to defend the uniqueness of humanity in the face of the development of AI? Is it philosophy’s job?

Krzysztof Wieczorek: I think philosophy is deeply divided in this respect, as it is on many other issues. There is a transhumanist trend in philosophy that quite strongly supports anthropological reductionism, and if we stayed within this trend, I think we would not be able to defend the uniqueness of man. Here I notice an eager readiness to share our own humanity with beings that will be created thanks to our technical agency, and at the same time, in some transhumanist manifestos, I also see an incomprehensible openness to breaking with the cultural traditions of defining humanity, precisely at the axiological level. Some philosophers are satisfied with a reductionist definition of man: a biological organism subject to evolution, described in physicalistic language, and nothing more than an animal highly specialised in performing certain functions.

However, if we remain on the ground of personalistic philosophy, we have at our disposal a language, a whole philosophical culture, which shows that a human being is an inimitable being, and that any beings created by improving algorithms and zero-one operations – attempts to create someone in the image and likeness of man – must stop at a certain boundary, one which man has crossed thanks to his personal essence.

Ewa Szwarczyńska: Research by Prof. Mariusz Wojewoda indicates that the topic of AI also affects the sphere of human cognitive processing. The issue of the “extended mind” that you deal with is not just a metaphorical slogan. You state that these concepts raise concerns that “introducing technological products into our lives, or into our bodies and brains, will violate the essence of humanity”. What is this essence? Is it even possible to lose the essence of humanity?

Mariusz Wojewoda: “Humanity” is a word that we can use as a species term. However, in philosophy or ethics it is often used in a normative sense – as something that defines us, something that functions in association with human nature. If we adopt a more dynamic understanding of a person – that we become a person as a result of environmental change – then this change would make us different people after some time (because environmental influences change). The question arises whether I am still “me” – between the ages of 5 and 50 – or whether “I” am a different person due to environmental changes.

Krzysztof Wieczorek: It is also worth recalling Aristotle and the difference between accidental and substantial change. A Greek legend related to Theseus gave rise to the so-called “paradox of the ship of Theseus”. This ship sailed every year between Crete and Piraeus to commemorate the day of Athens’ liberation from the tyranny of the Minotaur. From time to time, it had to be repaired and its parts replaced. After some years had passed, all the parts of the ship had been replaced. The question is whether it is still the same ship. What determines the identity of an entity?

Ewa Szwarczyńska: The power of artificial intelligence lies in the ability to collect data, transform it into information and, at a later stage, into knowledge. Taking into account Francis Bacon’s statement that power is based on knowledge, there is concern about whether humans will be able to maintain an advantage over artificial intelligence. If threatened by AI, will humans be able to surpass its technical capabilities?

Krzysztof Wieczorek: I would start by asking how we understand the concept of power. If we understand it purely technically – as the ability to effectively influence the state of affairs – then I think that power understood in this way “lies on the street” and is easy to reach for by anyone with certain competences to influence reality. Autonomous AI may do it just as well. However, if we understand power “in human terms”, then for me power is primarily about satisfying the need for meaning, recognition, and attention, and I do not see any possibility that AI would be equipped with the kind of needs that would drive it to reach for power, to revel in its own omnipotence. So far, there are no indications that would let us believe that deprivations such as the need for recognition, the protection of one’s own dignity, or the feeling of superiority over others will appear within AI. These are typically human characteristics. Therefore, it seems to me that power understood as satisfying the need to dominate others will remain a completely human domain.

Mariusz Wojewoda: Bacon juxtaposes power with knowledge. There is a pattern of thinking that whoever has an advantage in intelligence will also have an advantage over those who are less intelligent. Since I am more intelligent than animals, I decide what they should do, not the other way around. To put it very simply, it is about the influence of “higher” organisms on “lower” ones. The extrapolation is that if AI is smarter, it will rule the world, or we will decide to let it rule the world because it is smarter and solves problems better than us.

To what extent will AI and wisdom be related to selflessness and to the idea that those who are more intelligent are also more responsible? It is a matter of our expectations, because that is how it should generally be: those who have an intellectual advantage should feel more responsible for others.

Ewa Szwarczyńska: Thank you for this fascinating conversation.

Mariusz Wojewoda, Krzysztof Wieczorek: Thank you.
