In this paper, I first discuss what AI is.
Many people think of AI as some `thing' that will replace a person.
In reality, however, AI systems are not things that replace persons but
many kinds of functions that will
be embedded ubiquitously and invisibly in society.
Then I argue that the boundary between humans and machines will be
blurred. In every aspect of life, the functions of humans and the
functions of machines will be mixed and will form complex systems.
Hence, it will become difficult, for example, to articulate the causes of failures in such complex systems.
The blurred boundary between humans and machines leads to the
problem of redefining important concepts in human society.
When a person's behavior is induced by smart nudges embedded in society,
to what extent can we say that he/she acted on his/her own free will?
If a person acts partly on his/her own free will and partly under the
inducement of the smart machines around him/her, how will the concept
of responsibility change, if at all?
I argue that many important concepts of human society will be forced
to be `liquefied', meaning that the extension of each concept
will no longer be fixed but will change dynamically.
The concepts that may be liquefied include `tool', `actor and actee of moral act', `free will of autonomous individual', `responsibility', `accountability', `rights', and so on.
Fortunately, many philosophers and jurists are already
discussing the problem of possible changes of important concepts of human society.
We, artificial intelligence researchers, should collaborate with
those experts in humanities.
We should not only study how these
concepts change in human society but also revise AI design
according to those conceptual changes.
That is, AI will change concepts in the humanities,
those conceptual changes
will in turn change AI design,
and this cycle should continue.
Concretely, then, how can we change the design of AI?
For example, if the concept of responsibility can no longer be confined to humans but
is liquefied and distributed among networked humans and machines,
we should perhaps design new machines that observe and identify
this distribution. This means that we should develop AI systems to solve the problems
caused by AI systems.
I am now working on such new AI systems.