AI (Artificial Intelligence) and Philosophy
(Diary of an Old AI Researcher who is still Programming)
18 July 2019
I gave an invited talk at the session titled `AI and Philosophy' in JSAI (Japanese Society for Artificial Intelligence) 2019.
The following is a brief summary of my talk.
AI and Philosophy
Koichi Hori (University of Tokyo)
I do not use PowerPoint. I have long claimed: `Since AI researchers have
their own technologies, let us stop using PowerPoint and use our own systems
to make our presentations.'
However, unfortunately, I have not found many followers.
I make my presentations using a system I made myself,
named `KNC (Knowledge Nebula Crystallizer)'.
Today, I use the KNC version 2018.
This KNC2018 uses the Wasserstein distance to calculate the distances
among the fragments of my research notes accumulated over roughly
twenty years, and it dynamically shows relevant notes according to the context.
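The internal representation KNC uses is not described here, so the following is only a minimal, hypothetical sketch of such a distance calculation: each note fragment becomes a word-frequency distribution over a shared, ordered vocabulary, and the one-dimensional Wasserstein-1 distance is computed as the accumulated absolute difference of the two CDFs. Treating the vocabulary index as the ground metric is a crude assumption made for illustration; a real system would more likely use word embeddings (as in the Word Mover's Distance).

```python
from collections import Counter

def fragment_distribution(words, vocab):
    """Turn a note fragment (a list of words) into a probability
    distribution over a fixed, ordered vocabulary."""
    counts = Counter(w for w in words if w in vocab)
    total = sum(counts.values())
    return [counts[w] / total for w in vocab]

def wasserstein_1d(p, q):
    """Wasserstein-1 distance between two discrete distributions on the
    same ordered support: the sum of absolute CDF differences."""
    cdf_gap, dist = 0.0, 0.0
    for pi, qi in zip(p, q):
        cdf_gap += pi - qi
        dist += abs(cdf_gap)
    return dist

# Two tiny note fragments over a shared vocabulary (hypothetical data).
vocab = ["crystal", "knowledge", "nebula"]
p = fragment_distribution(["knowledge", "nebula"], vocab)   # [0.0, 0.5, 0.5]
q = fragment_distribution(["knowledge", "crystal"], vocab)  # [0.5, 0.5, 0.0]
print(wasserstein_1d(p, q))  # → 1.0
```

In a retrieval setting like KNC's, one would compute this distance from the current context to every stored fragment and surface the nearest ones.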
First of all, regarding the title `AI and Philosophy',
I should say that it is obvious that AI has deep relations with philosophy,
because AI is the discipline that studies and builds intelligence, and
philosophy has long been the central discipline asking what intelligence is.
Today, I want to talk more about what we should study now in the collaboration
of AI and philosophy, rather than what has been studied in the past.
Since this is a talk at JSAI, it may not be necessary to talk about what AI is.
But I would like to ask what you think AI is.
In the book titled `What is AI?' published by Kindai-kagaku-sha in 2016
(in Japanese), thirteen authors gave different definitions of AI.
I was one of the authors, and I gave the definition that AI means a
new world of intelligence, where humans and machines are mixed.
The mass media often convey the image of an AI that replaces a person,
but, in reality, many different kinds of machine functions
will be ubiquitously and invisibly embedded in human society,
forming a new world of intelligence in which the boundaries between humans
and machines are blurred.
This new world of the mixture of humans and machines is not
science fiction but is really emerging. For example, even now,
the outcomes of some activities may have many complex causes distributed
among humans and machines, and it is difficult to identify those causes.
Considering the relation between humans and tools, we, the old AI researchers,
have studied the philosophy of Heidegger and Husserl.
As for the autonomy of an independent person and his or her free will, we
have studied the philosophy of Kant.
We have always enjoyed discussions with the philosophers
who taught us Heidegger, Husserl, or Kant.
Nowadays, I think we should go a step beyond those enjoyable discussions
to substantial discussions on the relations between humans and machines,
because the above-mentioned problem of the blurring boundary between
humans and machines is actually emerging in our society.
Such discussion between AI researchers and philosophers is becoming
more and more important and necessary.
Sometimes, people, especially people in industry, regard
discussions of philosophical or ethical problems as something
that suppresses the free development of technology.
I would like to say that this is totally wrong. Discussions of
philosophical or ethical problems do not suppress
but actually promote the development of technology. They can even lead to
many business opportunities. Why? Because thinking about ethical problems
leads to considering what people will willingly accept,
and, as a result, to developing products that sell well.
In addition to researchers, business people should also
study philosophical and ethical problems.
Fortunately, good books have been published recently.
I recommend the following two books.
Kukita, Kanzaki, Sasaki: Introduction of Ethics from Robotics,
Nagoya University Press, 2017 (in Japanese)
Tatsuhiko Yamamoto (ed): AI and the Constitution, NihonKeizaiShinbun, 2018 (in Japanese).
The cover page of `AI and the Constitution' evocatively says
`Crisis of Humans: AI selects humans'.
If you respond simply that such an evocation is nonsense,
you are not qualified to study AI.
We should deeply understand why the experts in the humanities feel this crisis,
and we should consider how technology can give answers.
The technological answers will, perhaps, evoke new questions in the humanities,
and we will consider new answers to those new questions.
We, the AI researchers, should continue this cycle of discussion with
the experts in the humanities and with ordinary citizens.
If you want to study the ethical problems of technology more deeply
in the context of postphenomenology, I would like to recommend
Peter-Paul Verbeek: Moralizing Technology - Understanding and Designing the Morality of Things, The University of Chicago Press, 2011.
By reading this book, you can learn where the ethics of technology comes from.
Let me also list some other references here.
Jun'ichi Murata: Philosophy of Technology, Iwanami, 2009 (in Japanese).
Yoichiro Murakami: Death of Civilization / Rebirth of Culture, Iwanami, 2006 (in Japanese).
Arisa Ema: How to Walk in the AI Society, Kagaku-Doujin, 2019 (in Japanese).
Fukuda, Hayashi, Narihara: The Society Connected by AI, Koubundo, 2017 (in Japanese).
Susumu Hirano: Robot Law, Koubundo, 2017 (in Japanese).
Yanaga and Shishido (eds): Robot, AI, and Law, Yuuhikaku, 2018 (in Japanese).
I do not want to suggest here what important
topics you, the young researchers, should study.
In AI studies, the power of young researchers is important for
opening up new worlds.
So, please find the topics you like by yourselves.
As for me, I am thinking about the problem of the `liquefaction' of
important concepts in the humanities.
I have used the term `liquefaction' in my AI studies, in the context
of the dynamic change of knowledge.
Liquefaction means that the extension and the intension of a concept
become fluid and change dynamically.
Perhaps we should study the liquefaction of the following concepts.
- `self as a moral agent'
- `free will of an autonomous individual'
- `life and death'
Until today, the main work of philosophers has been the analysis
of the world, but I hope that they, the philosophers, will work with us,
the AI researchers, to synthesize possible new worlds.
Of course, it is not we, the researchers, who should decide which
new world is preferable. That should be determined through a democratic
process. I am afraid the current world, where the power of
huge companies is too strong, is not a sound one.
Considering what I can do as an engineer, I would like to
develop `grass-roots AI networks'.
I want to show diverse possibilities for building new worlds,
from which citizens can choose the preferable one.
We need not determine which is the best.
There should be diverse worlds.
We, the AI researchers, can contribute to building many different kinds of
intelligent worlds that support and enhance different cultures.
Fortunately, I can still write code. So, I would like to continue to
work with the young researchers to build new, preferable worlds of
intelligence.
Please do not worry too much about writing papers, but, at least sometimes,
please consider deeply what kind of world you want to build.
(First of all, let's stop using PowerPoint. :-) )
After my talk, I entered the questions and comments from the audience
into my system, and I enjoyed discussing them with the audience while
showing my system's output, which gave the relevant concepts and ideas
from my research notes.
© 2019 Koichi Hori