{"id":3274,"date":"2024-07-26T16:01:06","date_gmt":"2024-07-26T14:01:06","guid":{"rendered":"https:\/\/us.edu.pl\/idb\/?p=3274"},"modified":"2024-08-16T14:34:18","modified_gmt":"2024-08-16T12:34:18","slug":"swoboda-badan-filozofia-sztucznej-inteligencji","status":"publish","type":"post","link":"https:\/\/us.edu.pl\/idb\/en\/swoboda-badan-filozofia-sztucznej-inteligencji\/","title":{"rendered":"Freedom of research | Philosophy of artificial intelligence"},"content":{"rendered":"<p>[vc_row][vc_column][vc_empty_space][\/vc_column][\/vc_row][vc_row][vc_column width=&#8221;1\/3&#8243; css=&#8221;.vc_custom_1620308023905{background-color: #eaeaea !important;}&#8221;]\r\n                <div class=\"text-modules\">\r\n                    <div class=\"container\">\r\n                        \r\n                        <div class=\"text-modules__content\"><\/p>\n<p style=\"text-align: right;\"><strong><small><span style=\"letter-spacing: 0.6mm; color: #011535; font-size: 120%; font-family: PT Sans Narrow; font-weight: bolder;\">RESEARCH EXCELLENCE INITIATIVE<\/span><\/small><\/strong><\/p>\n<hr \/>\n<p style=\"text-align: right;\"><strong><small><span style=\"letter-spacing: 0.4mm; color: #9b132a; font-size: 130%; font-family: PT Sans Narrow;\">FREEDOM OF RESEARCH \u2013 SCIENCE FOR THE FUTURE<\/span><\/small><\/strong><\/p>\n<p>\n<\/div>\r\n                    <\/div>\r\n                <\/div><div class=\"container\"><div class=\"separator\" style=\"background-color: #002E5A\"><\/div><\/div>\r\n                <div class=\"text-modules\">\r\n                    <div class=\"container\">\r\n                        \r\n                        <div class=\"text-modules__content\"><\/p>\n<p style=\"font-size: 110%; color: #011535; font-family: 'PT Sans Narrow'; text-align: right;\">\u2018Freedom of research \u2013 science for the future\u2019 series consists of articles, interviews and short videos presenting research conducted by the winners of \u2018Freedom of 
research\u2019<\/p>\n<p>\n<\/div>\r\n                    <\/div>\r\n                <\/div>[vc_btn title=&#8221;\u2018FREEDOM OF RESEARCH \u2013 SCIENCE FOR THE FUTURE\u2019 SERIES&#8221; style=&#8221;classic&#8221; shape=&#8221;square&#8221; color=&#8221;blue&#8221; size=&#8221;sm&#8221; align=&#8221;right&#8221; css=&#8221;.vc_custom_1723811463414{margin-top: 4px !important;margin-right: 0px !important;border-top-width: 0px !important;border-right-width: 0px !important;padding-top: 0px !important;padding-right: 0px !important;}&#8221; link=&#8221;url:https%3A%2F%2Fus.edu.pl%2Finicjatywadoskonalosci%2Fswoboda-badan-nauka-dla-przyszlosci%2F|||&#8221;][\/vc_column][vc_column width=&#8221;2\/3&#8243;][vc_column_text css=&#8221;.vc_custom_1723811383924{margin-bottom: 0px !important;border-bottom-width: 0px !important;padding-bottom: 0px !important;padding-left: 10px !important;}&#8221;]<\/p>\n<p style=\"font-size: 200%; font-family: PT Sans Narrow; color: #002e5a;\"><strong>Prof. Mariusz Wojewoda and Prof. 
Krzysztof Wieczorek<\/strong><\/p>\n<p>[\/vc_column_text][vc_row_inner css=&#8221;.vc_custom_1620304473772{margin-top: 0px !important;border-top-width: 0px !important;padding-top: 0px !important;}&#8221;][vc_column_inner width=&#8221;2\/3&#8243;][vc_empty_space height=&#8221;2px&#8221; css=&#8221;.vc_custom_1620304425731{background-color: #9b132a !important;}&#8221;][\/vc_column_inner][vc_column_inner width=&#8221;1\/3&#8243;][\/vc_column_inner][\/vc_row_inner]\r\n                <div class=\"text-modules\">\r\n                    <div class=\"container\">\r\n                        \r\n                        <div class=\"text-modules__content\"><\/p>\n<h3 style=\"font-size: 120%; font-family: PT Sans Narrow; color: #002e5a;\">Philosophy of artificial intelligence<\/h3>\n<p>\n<\/div>\r\n                    <\/div>\r\n                <\/div>\r\n                <div class=\"text-modules\">\r\n                    <div class=\"container\">\r\n                        \r\n                        <div class=\"text-modules__content\"><p><span style=\"font-size: 10pt;\">| Ewa Szwarczy\u0144ska, PhD |<\/span><\/p>\n<p>Does man \u2013 as the creator of AI \u2013 deserve to be called the new creator? Can their creation become an equal partner for them, or maybe a competitor? Is this just a utopian vision of reality? These and other questions are answered during a conversation with Ewa Szwarczy\u0144ska, PhD by the winners of the 2nd edition of the \u2018Freedom of Research\u2019 competition of the Research Excellence Initiative \u2013 Mariusz Wojewoda, PhD, DLitt, Assoc. Prof. and Prof. 
Krzysztof Wieczorek from the Institute of Philosophy of the University of Silesia.<\/p>\n<\/div>\r\n                    <\/div>\r\n                <\/div>\r\n                <div class=\"text-modules\">\r\n                    <div class=\"container\">\r\n                        \r\n                        <div class=\"text-modules__content\"><\/p>\n<figure id=\"attachment_3276\" aria-describedby=\"caption-attachment-3276\" style=\"width: 400px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" class=\"size-medium wp-image-3276\" src=\"http:\/\/us.edu.pl\/idb\/wp-content\/uploads\/sites\/72\/fotografie\/swoboda-badan-II\/prof-Wojewoda-i-prof-Wieczorek-400x600.jpg\" alt=\"Od lewej: prof. dr hab. Mariusz Wojewoda oraz prof. dr hab. Krzysztof Wieczorek\" width=\"400\" height=\"600\" srcset=\"https:\/\/us.edu.pl\/idb\/wp-content\/uploads\/sites\/72\/fotografie\/swoboda-badan-II\/prof-Wojewoda-i-prof-Wieczorek-400x600.jpg 400w, https:\/\/us.edu.pl\/idb\/wp-content\/uploads\/sites\/72\/fotografie\/swoboda-badan-II\/prof-Wojewoda-i-prof-Wieczorek-384x575.jpg 384w, https:\/\/us.edu.pl\/idb\/wp-content\/uploads\/sites\/72\/fotografie\/swoboda-badan-II\/prof-Wojewoda-i-prof-Wieczorek.jpg 543w\" sizes=\"(max-width: 400px) 100vw, 400px\" \/><figcaption id=\"caption-attachment-3276\" class=\"wp-caption-text\"><span style=\"font-size: 10pt;\">From the left: Prof. Mariusz Wojewoda and Prof. 
Krzysztof Wieczorek | Photo by Ewa Szwarczy\u0144ska, PhD<\/span><\/figcaption><\/figure>\n<p>\n<\/div>\r\n                    <\/div>\r\n                <\/div>\r\n                <div class=\"text-modules\">\r\n                    <div class=\"container\">\r\n                        \r\n                        <div class=\"text-modules__content\"><p><strong>Ewa Szwarczy\u0144ska: Utopia, a digital paradise, a better world is one of the possible future scenarios due to the presence of AI in human life.\u00a0Is there a techno-utopian vision of the future ahead of us?<\/strong><\/p>\n<p><strong>Mariusz Wojewoda:<\/strong>\u00a0If we think about the word \u201cutopia\u201d, the etymology indicates that it is a place that does not exist or a good place. People creating a vision of a better world is quite an old problem. Usually, when we talk about utopia, we think of Renaissance utopias, including the <em>Utopia<\/em> by Thomas More. Nowadays, this concept concerns the search for a better life for man. We introduce technology and various technical artifacts to improve the quality of our lives \u2013 to repair or improve the mind and\/or body. To do this, we need: robots equipped with AI that will act as caregivers in old age or illness, interfaces connecting the brain with a computer, improving the functioning of the aging brain, or a smartphone application that will fulfil the function of a life companion. The question arises then: what will be the long-term effects?<\/p>\n<p>We are to be members of society 5.0, taking advantage of the opportunities associated with the Internet of Things (IoT), virtual reality and fast access to the Internet from anywhere. Network resources will enable the creation of a knowledge society. The ways of learning will change, we will increasingly use education equipped with an \u201cartificial\u201d teacher. However, this requires the development of new competences. 
Will we then continue to trust human authorities, or will artificial intelligence become an authority for us? Will knowledge still entail free choice and independent cognition, or will it be specially prepared?<\/p>\n<p>This is thinking in the category of designing a vision of the future. We can be optimistic about this and assume that we will be able to adapt to change, despite its dizzying pace. It is possible that our brains cannot keep up with this pace, although we are quite diverse in this respect.<\/p>\n<p><strong>Ewa Szwarczy\u0144ska: If we were to talk about the university as a place for collecting and producing knowledge, wouldn&#8217;t this be too narrow of an approach, which omits the concept of wisdom? Is it enough for us to collect information? Isn&#8217;t there more to it than that?<\/strong><\/p>\n<p><strong>Krzysztof Wieczorek:<\/strong>\u00a0I am an enthusiast of Stanis\u0142aw Lem\u2019s work. As a futurologist, he predicted many phenomena that are happening today. In the column titled \u2018Intelligence, Reason, Wisdom\u2019 he grades the competences of consciousness, wondering to what extent AI will be able to overcome subsequent thresholds \u2013 first the threshold of consciousness, then the transition from simple, task-oriented intelligence to reason and finally from reason to wisdom. If these considerations were applied to the university, we should consider to what extent we \u2013 as an institution involving many people \u2013 are able to exceed these thresholds. Are we just a kind of corporation of minds, used to reproduce certain intellectual procedures, or can we do something more? To pose problems, to set new perspectives, to answer questions that have an existential dimension?<\/p>\n<p>Artificial intelligence is gradually becoming our partner and ally in the search for knowledge, but the question is whether it can also become our partner in the search for wisdom. 
Are we able to use our flirtation with AI not only to protect humanistic values, but also to expand them and teach artificial consciousness to think in problem-oriented, humanistic and axiological categories? Or will we fail in this area once again and become an annex to the machines we built ourselves?<\/p>\n<p><strong>Ewa Szwarczy\u0144ska: The transition from \u201cinformation\u201d to \u201cknowledge\u201d seems to be more understandable than the transition from the level of knowledge to the level of wisdom. What is the qualitative difference between these degrees?<\/strong><\/p>\n<p><strong>Mariusz Wojewoda:<\/strong>\u00a0Wisdom is rather an added value. One cannot create a school where wisdom is taught, and this means that it happens incidentally \u2013 not because that is the intention. It is a rare element. We say that machines provide information and maybe they will overtake us in knowledge (or it is possible that we will somehow communicate with them and our knowledge will be better), but wisdom is still an exclusively human phenomenon. We don\u2019t really expect a machine to be wise.<\/p>\n<p><strong>Krzysztof Wieczorek:\u00a0<\/strong>When we talk about the relationship between knowledge and wisdom, it seems to me that while knowledge is pragmatic and serves to achieve specific goals useful for someone, wisdom goes beyond this level of utilitarianism towards transcendentals \u2013 truth, goodness, beauty. The ancients spoke of wisdom as a virtue that is directly related to values. Mainly to the highest values which \u2013 as transcendentals \u2013 transcend categorical thinking. And we only know human wisdom, the characteristics of which include selflessness, kindness, sacrifice, generosity. These are some features that cannot be defined operationally, and yet we intuitively know what they mean. However, it is possible to achieve another type of wisdom, which is not human wisdom. 
Such a thought experiment was the novel <em>Solaris<\/em> by Stanis\u0142aw Lem, the protagonist of which was an intelligent ocean of the titular planet to which Earthlings sent their research expedition. This ocean had its own wisdom that was incompatible with human wisdom. It is also possible that by launching deep, unsupervised machine learning processes, we will evolve into machine wisdom that will be unlike human wisdom. It will be an autonomous wisdom pursuing its own path to access the transcendentals, and it can be a fascinating adventure if we manage to understand each other.<\/p>\n<p><strong>Mariusz Wojewoda:<\/strong>\u00a0However, for now this is only a possibility. We don\u2019t have such technical capacity yet. The AI we have is at the level of weak artificial intelligence. There are three levels to it \u2013 weak, strong and super intelligence. What we say about these wisdom aspects would be on the third level, while we are still on the first level. This may happen, but as of right now this is still a utopia that we are talking about.<\/p>\n<p><strong>Ewa Szwarczy\u0144ska:\u00a0Stephen Hawking stated that in the future computers will overtake people in terms of intelligence and it is important that their goals are consistent with human goals. In this context, the question arises about the place and importance of AI in the world of values. Can we consider that AI itself constitutes value, or on the contrary \u2013 is it a threat to values?<\/strong><\/p>\n<p><strong>Krzysztof Wieczorek:\u00a0<\/strong>Martin Heidegger in his essay \u2018The Question Concerning Technology\u2019 suggests that technology be considered from two different levels. Means, instruments and tools that are instrumental in nature and do not go beyond the scope of application to specific functions are different from the technology itself as an environment. The essence of technology is nothing technological. 
Heidegger\u2019s essay begins with this rather provocative sentence, in which he claims that technology has become a new home, a new environment for human life, which has caused our lives, valuations, thinking and goals to change.<\/p>\n<p>By using many different tools on a daily basis, such as a smartphone, computer network, instant messengers, etc., we have created an entire \u201cnetwork of multiverses\u201d, alternative realities \u2013 virtual reality, as well as a hybrid reality, some of the elements of which are taken from the physics of the sensory world, some from the virtual world \u2013 and for us they form one common conglomerate. We live in a world that we have changed through our actions, and which is also changing us. In my opinion, the fact that today we live in this multiverse means that we also live in a different axiological space. We have different needs, different expectations, different hopes.<\/p>\n<p>There is also an idea that each subsequent technological evolution is initially perceived as a threat, primarily due to the appearance of new technologies, new machines, new devices that will take away a person\u2019s job, and the person will be caught off guard and won&#8217;t know what to do. But it turns out \u2013 when we trace the history of technological evolution \u2013 that people can cope in such a way that when they are pushed out of a certain sphere of activity, they immediately find a new one, and this new sphere sometimes turns out to be more interesting, more creative, more satisfying than what they did before. Once upon a time, a worker worked on an assembly line in a Ford factory and throughout his life he screwed the same type of nut to the same type of screw using the same type of movement. Today our work is much more creative. 
Therefore, if AI starts to displace us from more professions (for example, there are forecasts that ChatGPT will destroy the profession of a journalist, because it will produce news and cope better in the world of instant messengers than humans), then we will find other employment, other interests and we will pursue other values.<\/p>\n<p><strong>Mariusz Wojewoda:<\/strong>\u00a0If we are to tame AI, we must tame it axiologically, that is, provide it with the values \u200b\u200bthat are important to us so that it does not pose a threat to us. It may be that it will be useful for an organisation or institution, but I will be able to trust it only when I decide that it will not put the interest of the institution above my own good and will not instrumentalise me.<\/p>\n<p><strong>Ewa Szwarczy\u0144ska: The question is whether we can even use the concept of value at all in relation to AI, or is it just some kind of priority inherent in this creation?<\/strong><\/p>\n<p><strong>Krzysztof Wieczorek:<\/strong>\u00a0Value systems are very often a pretext for value conflict. If we construct, or try to move towards, an AI that would be universally kind to everyone, would it actually meet human expectations \u2013 not only those declared verbally, but also those that we actually live? It\u2019s hard for me to imagine the enthusiasm for someone who would be equally kind to Kargul and Pawlak. If we are talking about kindness, let it be someone who is kind to me, but ruthless towards my enemies \u2013 as in the Old Testament. There is a certain danger here, which is that we ourselves are not entirely consistent in formulating our expectations and our goals. Perhaps we think that we want to build a kind of a friend to everyone, but what we actually think about is making someone who will hug us, but who will not be so kind to someone else. 
And this is a different goal and a different system of values.<\/p>\n<p><strong>Ewa Szwarczy\u0144ska: When we talk about the purposefulness of work carried out on AI, the question arises whether, apart from its pragmatic dimension, it can be seen as a manifestation of the human need to seek the Other or the desire to have creative powers, in other words: to be like God? Who is the creator of AI? A seeker, a discoverer, or perhaps a creator? Which need plays the dominant role here \u2013 discovery or creation? Or maybe it\u2019s just pure pragmatism?<\/strong><\/p>\n<p><strong>Krzysztof Wieczorek:<\/strong>\u00a0While pondering this problem, I discovered an excerpt from <em>The Street of Crocodiles<\/em> by Bruno Schulz. In the <em>Treatise on Mannequins<\/em>, the author puts forward the thesis that man has discovered a very deep-seated need to be the second creator. Schulz calls it the \u201csecond-generation demiurge\u201d. Since we have learned to think of ourselves in this Christian paradigm \u2013 man as a work of God, created in the image and likeness of the Creator \u2013 we have discovered a longing to transfer this similarity that God has introduced in us, as if to the next generation of creatures for whom we will be gods. And if there is any element of truth in this (and I think there is, because if we look at a number of myths, legends, fairy tales told in various cultures, then this human longing for a certain agency, for metaphysical fatherhood, can actually be decoded in certain mythological or fairy-tale topoi), it would give a new dimension to our actions. Therefore, perhaps an additional force that encourages us to invest as much creativity as possible in newer forms of AI is the longing to replace the Creator to some extent and become a god for someone else. 
Not for ourselves anymore.<\/p>\n<p>God was endowed with omnipotence by definition, but when creating humans \u2013 according to biblical stories \u2013 he gave up his omnipotence in favour of equipping them with freedom. In fact, this can be interpreted in terms of God\u2019s greatest mistake. Friedrich Nietzsche or Emil Cioran claim that God\u2019s greatest mistake was to provide man with free will, because in practice it meant the need for man to rebel against his Creator. And now the second-generation demiurge is trying, in a sense, to outsmart God, that is, not to make the same mistake twice. Therefore, he tries to equip his \u201cimago homini\u201d, his creation, with full availability and full controllability \u2013 so that it does not rebel. We design AI concepts that are to be fully dependent on us. Even if it makes decisions autonomously, we are putting great effort into implementing our values, which will be obligatory, into the AI \u200b\u200bdecision-making process.<\/p>\n<p>In Lem\u2019s para-novel <em>Golem XIV<\/em>, there is a thread in which machines, programming themselves in subsequent, more and more perfect generations, have been pre-programmed in such a way that they have certain unbreakable ethical norms built-in at the drive level (and not at the convertible level), which protect humans against a possible rebellion of machines. And here lies the fundamental difference \u2013 God has not equipped us with any such obligatory ethical norms that we cannot overcome. In fact, there is no insurmountable value for man and he can trample on any sacredness.<\/p>\n<p><strong>Ewa Szwarczy\u0144ska:\u00a0Is philosophy able to defend the uniqueness of humanity in the face of the development of AI? Is it philosophy\u2019s job?<\/strong><\/p>\n<p><strong>Krzysztof Wieczorek:<\/strong>\u00a0I think philosophy is deeply divided in this respect, as it is in many other aspects. 
There is a transhumanist trend in philosophy that quite strongly supports anthropological reductionism, and if we stayed within this trend, I think we would not be able to defend the uniqueness of man. Here I notice an eager readiness to share our own humanity with beings that will be created thanks to our technical agency, and at the same time, in some transhumanist manifestos, I also see an incomprehensible openness to breaking with cultural traditions of identifying humanity, precisely at the axiological level. Some philosophers are satisfied with a reductionist definition of man: as a biological organism subject to evolution, described using physicalistic language, and being nothing more than an animal highly specialised in performing certain functions.<\/p>\n<p>However, if we remain within personalistic philosophy, we have at our disposal a language, a whole philosophical culture, which shows that a human being is an inimitable being and that any beings created by improving algorithms and zero-sum activities, attempts to create someone in the image and likeness of man, must stop at a certain boundary which man has crossed thanks to his personal essence.<\/p>\n<p><strong>Ewa Szwarczy\u0144ska: Research by Prof. Mariusz Wojewoda indicates that the topic of AI also affects the sphere of human processing.\u00a0The issue of the \u201cextended mind\u201d that you deal with is not just a metaphorical slogan. You state that these concepts raise concerns that \u201cintroducing technological products into our lives, or into our bodies and brains, will violate the essence of humanity\u201d. What is this essence? Is it even possible to lose the essence of humanity?<\/strong><\/p>\n<p><strong>Mariusz Wojewoda:<\/strong>\u00a0\u201cHumanity\u201d is a word that we can use as a species term. However, in philosophy or ethics it is often used in a normative sense \u2013 as something that defines us, something that functions in association with human nature. 
If we adopt a more dynamic understanding of a person \u2013 that we become a person as a result of environmental change \u2013 then this change would make us different people after some time (because environmental influences change). The question arises whether I am still \u201cme\u201d \u2013 between the ages of 5 and 50 \u2013 or whether \u201cI\u201d am a different person due to environmental changes.<\/p>\n<p><strong>Krzysztof Wieczorek:<\/strong>\u00a0It is also worth recalling Aristotle and the difference between accidental and substantial change. Aristotle uses a Greek myth that is related to Theseus \u2013 in Greek culture there was the so-called \u201cparadox of the ship of Theseus\u201d. This ship sailed every year between Crete and Piraeus to commemorate the day of the liberation of Crete from the tyranny of the Minotaur. From time to time, it had to be renovated and parts replaced. After some years had passed, all parts of this ship were replaced. Aristotle asks whether it is still the same ship. What determines the identity of an entity?<\/p>\n<p><strong>Ewa Szwarczy\u0144ska: The power of artificial intelligence lies in the ability to collect data, transform it into information and, at a later stage, into knowledge. Taking into account Francis Bacon\u2019s statement that power is based on knowledge, there is concern about whether humans will be able to maintain an advantage over artificial intelligence. If threatened by AI, will humans be able to surpass its technical capabilities?<\/strong><\/p>\n<p><strong>Krzysztof Wieczorek:<\/strong>\u00a0I would start by asking how we understand the concept of power, because if it is purely technical \u2013 as the ability to effectively influence the state of affairs \u2013 then I think that power understood in this way \u201clies on the street\u201d and is easy to reach for, having certain competences to influence reality. Autonomous AI may as well do it. 
However, if we understand power \u201cin human terms\u201d, then for me power is primarily about satisfying the need for meaning, recognition, attention, and I do not see any possibility that AI would be equipped with the kind of needs that would drive it to reach for power, to revel in its own omnipotence. So far, there are no indications that would let us believe that deprivations will appear within AI, such as the need for recognition, protection of one\u2019s own dignity, or the feeling of superiority over others. These are typically human characteristics. Therefore, it seems to me that power understood as satisfying the need to dominate others will remain a completely human domain.<\/p>\n<p><strong>Mariusz Wojewoda:<\/strong>\u00a0Bacon juxtaposes power with knowledge. There is a pattern of thinking that whoever has an advantage in intelligence will also have an advantage over those who are less intelligent. Since I am more intelligent than animals, I decide what they should do, not the other way around. To put it very simply, it is about the influence of \u201chigher\u201d organisms on \u201clower\u201d ones. The extrapolation is that if AI is smarter, it will rule the world, or we will decide to let it rule the world because it is smarter and solves problems better than us.<\/p>\n<p>To what extent will AI and wisdom be related to selflessness and to the idea that those who are more intelligent are also more responsible? 
It is a matter of our expectations, because that is how it should generally be \u2013 those who have an intellectual advantage should feel more responsible for others.<\/p>\n<p><strong>Ewa Szwarczy\u0144ska: Thank you for this fascinating conversation.<\/strong><\/p>\n<p><strong>Mariusz Wojewoda, Krzysztof Wieczorek:<\/strong>\u00a0Thank you.<\/p>\n<\/div>\r\n                    <\/div>\r\n                <\/div>[\/vc_column][\/vc_row]<\/p>","protected":false},"excerpt":{"rendered":"<p>[vc_row][vc_column][vc_empty_space][\/vc_column][\/vc_row][vc_row][vc_column width=&#8221;1\/3&#8243; css=&#8221;.vc_custom_1620308023905{background-color: #eaeaea !important;}&#8221;][vc_btn title=&#8221;\u2018FREEDOM OF RESEARCH \u2013 SCIENCE FOR THE FUTURE\u2019 SERIES&#8221; style=&#8221;classic&#8221; shape=&#8221;square&#8221; color=&#8221;blue&#8221; size=&#8221;sm&#8221; align=&#8221;right&#8221; css=&#8221;.vc_custom_1723811463414{margin-top: 4px !important;margin-right: 0px !important;border-top-width: 0px !important;border-right-width: 0px !important;padding-top: 0px !important;padding-right: 0px !important;}&#8221; link=&#8221;url:https%3A%2F%2Fus.edu.pl%2Finicjatywadoskonalosci%2Fswoboda-badan-nauka-dla-przyszlosci%2F|||&#8221;][\/vc_column][vc_column width=&#8221;2\/3&#8243;][vc_column_text css=&#8221;.vc_custom_1723811383924{margin-bottom: 0px !important;border-bottom-width: 0px !important;padding-bottom: 0px !important;padding-left: 10px !important;}&#8221;] Prof. Mariusz Wojewoda and Prof. 
Krzysztof Wieczorek [\/vc_column_text][vc_row_inner css=&#8221;.vc_custom_1620304473772{margin-top: 0px !important;border-top-width: 0px [&#8230;]<\/p>\n<p><a class=\"btn btn-secondary understrap-read-more-link\" href=\"https:\/\/us.edu.pl\/idb\/en\/swoboda-badan-filozofia-sztucznej-inteligencji\/\">Read More&#8230;<\/a><\/p>\n","protected":false},"author":33,"featured_media":3275,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_expiration-date-status":"","_expiration-date":0,"_expiration-date-type":"","_expiration-date-categories":[],"_expiration-date-options":[]},"categories":[10,14],"tags":[],"acf":[],"_links":{"self":[{"href":"https:\/\/us.edu.pl\/idb\/en\/wp-json\/wp\/v2\/posts\/3274"}],"collection":[{"href":"https:\/\/us.edu.pl\/idb\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/us.edu.pl\/idb\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/us.edu.pl\/idb\/en\/wp-json\/wp\/v2\/users\/33"}],"replies":[{"embeddable":true,"href":"https:\/\/us.edu.pl\/idb\/en\/wp-json\/wp\/v2\/comments?post=3274"}],"version-history":[{"count":2,"href":"https:\/\/us.edu.pl\/idb\/en\/wp-json\/wp\/v2\/posts\/3274\/revisions"}],"predecessor-version":[{"id":3298,"href":"https:\/\/us.edu.pl\/idb\/en\/wp-json\/wp\/v2\/posts\/3274\/revisions\/3298"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/us.edu.pl\/idb\/en\/wp-json\/wp\/v2\/media\/3275"}],"wp:attachment":[{"href":"https:\/\/us.edu.pl\/idb\/en\/wp-json\/wp\/v2\/media?parent=3274"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/us.edu.pl\/idb\/en\/wp-json\/wp\/v2\/categories?post=3274"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/us.edu.pl\/idb\/en\/wp-json\/wp\/v2\/tags?post=3274"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}