Learning about AI
- What is AI?
- AI Opportunities and Risks
- Q.01
What is personal information?
- Q.02
What is the purpose of the principles and guidelines for AI?
- Q.03
If an AI causes an accident, who is responsible?
- Q.04
Does AI have the same rights as humans?
- Q.05
What is information?
- Q.06
What is transhumanism?
- Q.07
What is a cyborg?
- Q.08
What is scoring?
- Q.09
What is AI?
- Q.10
What is AI used for?
- Q.11
What can AI not do?
- Q.12
Is AI taking away human jobs?
- Q.13
Does AI have morality?
- Q.14
When did AI research start?
- Q.15
Does AI have a mind?
Q.What is personal information?
Personal information is information about a living individual, such as a name, date of birth, or other details, by which a specific individual (e.g., Sumiko Kamado, Eiichi Wagatsuma) can be identified. For example, school grades and health checkup results constitute personal information. In today’s society, various types of personal information are collected by governments and companies. This information is then compiled into databases for use by computers, or provided to other companies. AI may also use the personal information contained in these databases, for example by learning from and analyzing it. Appropriate use of personal information has the potential to make our lives more convenient, richer, safer, and healthier. On the other hand, if personal information is used inappropriately, it may threaten individual privacy or cause unfair discrimination. Therefore, in order to protect the rights of individuals, the Act on the Protection of Personal Information imposes various obligations on government offices and companies that handle personal information, to ensure that it is handled appropriately.
Q.What is the purpose of the principles and guidelines for AI?
In recent years, a variety of principles and guidelines related to AI have been established, both in Japan and abroad. For example, in Japan, the Ministry of Internal Affairs and Communications’ Conference toward AI Network Society has established the “AI Research and Development Guidelines” and the “AI Utilization Principles”, and a panel of experts in the Cabinet Office has formulated the “Social Principles of Human-Centric AI”. Internationally, the OECD, an international organization of advanced nations, and the G20, an intergovernmental forum of the world’s largest economies, have agreed on sets of “AI Principles”. These principles and guidelines generally expect developers and users of AI to ensure transparency, human control, security and safety, respect for privacy, and respect for human dignity and the individual. Although these principles and guidelines are not legally enforceable, it is hoped that AI developers and users will keep them in mind as they develop and use AI, so that AI that people can trust will spread throughout society.
Q.If an AI causes an accident, who is responsible?
If a robot or self-driving car with built-in AI causes an accident, who is responsible? Under current Japanese law, if a car accident results in death or injury, the driver may be penalized if he or she is found to have been negligent. The car’s owner may also be liable to pay compensation to the victims. In addition, if the car happens to be defective and the defect causes an accident that harms others, the manufacturer of the car may also be liable for damages under the Product Liability Law. So what happens if an accident occurs during automated driving by AI (level 4 or higher)? In this case, since a human is not driving, the driver cannot be held responsible. However, even under current Japanese law, the owner of the car may be held liable for damages to the victim. Likewise, if there is a defect in the AI or in the self-driving car in which it is built, and the defect causes an accident that harms others, the manufacturer may be held liable under the Product Liability Law. In any case, current law is not very clear about what kind of liability the manufacturer or owner of the car would bear in the event of an accident caused by AI-automated driving, and it is also necessary to reexamine whether the balance in the sharing of liability is appropriate. With an eye to the future development and spread of AI in our society, we also need to reexamine the law.
Q.Does AI have the same rights as humans?
Under current Japanese law, AI is not considered to have the same rights as human beings. Human beings have a variety of rights, but human rights such as freedom of thought and conscience and the right to privacy can be enjoyed only by human beings, or more strictly, only by individuals. On the other hand, some rights, such as property rights and copyrights, can be enjoyed not only by individuals but also by legal persons such as companies. A legal person is an entity that is not an actual human being but is legally entitled to some of the same rights and obligations as a human being; examples include companies and educational institutions. If we consider AI a type of legal person, it could have various rights and obligations, just like a company. In fact, the European Parliament has issued a report putting forward the idea of treating smart robots with built-in AI as legal persons. This is one of the solutions that have been proposed for the problem of liability that arises in the event of an accident caused by such a robot. Although no law in Japan today treats an AI as a legal person, such a possibility may be considered in the future. Nevertheless, even if AI comes to own property or assume liability for its accidents, granting AI rights that have so far been granted only to human beings, such as privacy and the right to vote, would require careful consideration, as it could change the entire legal system, which has been constructed around humans.
Q.What is information?
Many people immediately think of computers when they hear the word “information”. In fact, many people assume that information is data that is processed by computers.
But that is only a small part of what is called information.
When the term “information society” was first coined, computers were not yet widespread. The term referred to a society in which mass media such as newspapers and television had become mainstream, and mass media were considered the place where information was handled. Likewise, the term “information literacy” covers not only how to use a computer but also how to conduct research using books. Thus, information is not necessarily limited to things related to computers.
In the first place, information is, in fact, closely related to the value of living things. I encourage you to look into this as well.
Q.What is transhumanism?
Do you ever feel limited in your life? With hard work, you may be able to break through your limits. You may be able to run faster, or move ahead in your studies.
But it seems that it is almost impossible to fly across continents using only your arms and legs, or to memorize all the books in the world. It would be fair to say that there are biological limitations.
The idea of breaking through such limits with science and technology is called transhumanism.
Can we really attain immortality by integrating ourselves with technology? Or do we even want to?
Q.What is a cyborg?
As cyborgs often appear in works of fiction, when you hear the term, mechanized people found in comic books and movies probably come to your mind.
The term “cyborg” itself dates back to 1960. Cyborgs were imagined as a way for humans to live in outer space: a human would be integrated with a machine that regulated the body so that the brain would not atrophy and the lungs would not burst. The cyborg appeared against the backdrop of the Cold War between the U.S. and the Soviet Union and the space race.
There are people with physical disabilities who try to manage their lives by having machines implanted in their bodies.
On the other hand, there are those who seek to enhance their abilities by implanting machines in their bodies. Would you like to have machines implanted in your body, even if you have no physical disabilities?
Q.What is scoring?
As we live our lives, we are sometimes given scores, as in a test. Though it is just a score, we may be happy or frustrated with it, or we may even work hard to get a particular score. Scores may be presented as adjusted standard deviation scores, and may instill one with a sense of superiority or make one suffer from a sense of inferiority.
In test scoring, scores are calculated based on a defined scope and against clearly defined criteria, unlike descriptive questions, which tend to be more ambiguous. Moreover, since scoring is limited to what is covered by the test, the scores do not reflect the person as a whole.
However, as our behavior has become more and more digitized, behaviors other than test-taking have become subject to scoring. What kind of society will we have when the total score of person A is 900, and the total score of person B is 450?
How would you feel if the government started assigning such scores?
Q.What is AI?
Artificial Intelligence (AI) is the use of computer science and technology to produce machines whose intelligence equals or exceeds that of humans. The results have various applications in our society: for example, voice assistants installed in smartphones and smart speakers, Internet search engines, camera face recognition and photo editing apps, robot vacuums, machine translation, and driver assistance.
However, this gives rise to an important question: what is intelligence? Contemporary AI generally supposes that human intelligence is the faculty of computation or information processing. What do you think about human intelligence? Is it really all about computation?
Q.What is AI used for?
AI is already being used in various situations: for example, Internet search engines; computer games; home appliances such as vacuum cleaners, air conditioners, refrigerators, and microwaves; face recognition in cameras; photo editing apps; voice assistants installed in smartphones and smart speakers; machine translation; and driver assistance in cars. It is also used in marketing and advertising, e-commerce, infrastructure maintenance, security systems, scoring, and military applications.
AI today can only perform specific tasks. We have not yet reached the phase where AI can replicate every human capability or exceed the intelligence of humans – this remains in the realm of science fiction.
There are many predictions and prophecies about the future applications of AI, but in reality, no one knows. Researchers are still discussing what applications of AI are possible in principle. It is up to us what kind of AI will be realized and how it will be used in our society. How would you like to see AI developed and applied?
Q.What can AI not do?
AI can do many things, but it is still subject to many limitations. Firstly, AI doesn’t recognize meaning and value. It can, of course, memorize the definitions of words, but it doesn’t make sense of their meaning.
When we say “meaning,” we sometimes refer to “value,” as when you say, “This is meaningful to me.” AI cannot understand the deeper meaning in such a case, because information theory, which is the basis of today’s AI, does not deal with questions of meaning and value. However, if AI is to be used in our society, meaning and value are of great importance for effective communication with people and for addressing issues of ethics and morals.
Without this ability, how can AI be applied with positive results? Do you think it is possible to create an AI that has a deep level of understanding?
Q.Is AI taking away human jobs?
As times have changed, so have human jobs. The nature of work changes as AI advances. Some jobs become obsolete, and others are created. AI transforms, rather than removes, our jobs. It is said that theoretically AI can replace jobs that can be reduced to calculation, or information processing.
However, the important thing is that humans have free will to decide what they do. In other words, what kind of jobs humans do is not to be decided by technology, but by humans themselves.
If you have a dream, a belief, or a mission, isn’t it the role of AI to help you make it come true? How do you think we can use AI to create a future society where humans can do what they should do and what they want to do?
Q.Does AI have morality?
AI learns and decides what rules it should follow and operates according to those rules. However, it is humans who direct this learning and decide what learning materials to use. In the process of machine learning, humans may also play the role of teachers. Humans set the goals and the rules. Therefore, AI cannot understand morals without human input.
However, are we humans able to teach morality? Do we ourselves have a clear understanding of what morality is?
AI adheres to programmed rules. But the problem is not simply how to program morals. Is there a rule that can be said to be moral if we follow it? How do we know which rules should be followed? What do you think about this?
Q.When did AI research start?
The field of AI research began in America in the 1950s. However, the basic idea of artificial machines endowed with intelligence equal to or greater than that of human beings has a long history; it can be traced back at least as far as ancient Greece. In other words, AI is a historical research project that reflects a traditional worldview in Western thought.
Modern AI, which started in the mid-1950s, has progressed as a continuous process of meeting and overcoming obstacles.
The first AI boom occurred in the 1950s, when AI demonstrated that computing machines were capable of the logical operations needed to perform mathematical tasks. But this approach suffered a setback, as it could only be applied to solving simple puzzles and games. By the early 1980s, AI had advanced to the point where its applications became wider and more effective, and a second boom came. However, it still could not overcome certain fundamental problems, and the field entered a slower period of development. In the 2010s, big data and advances in machine learning techniques, in particular statistical methods based on neural network models, brought rapid development to the field. This advance led to success in previously problematic areas, although it also came at the cost of the logical exactness of earlier approaches. Now, a third AI boom is flourishing.
Q.Does AI have a mind?
Does current AI have its own mind? Will it develop a mind in the future? Researchers have yet to reach a consensus on these questions, partly because we do not have a scientifically established concept of ‘mind’, which leads to many different opinions.
For example, some people say we can create ‘mind’ by reproducing the human brain exactly as it is. But others say mind is imaginary: if we think it exists, it is because of an insufficiency in current science. Also, there are those who argue that even if science progressed and the human brain were reproduced, we would still not be able to create a mind-equipped AI because ‘mind’ is not a physical phenomenon.
There are countless other debates on this topic, leading to continued uncertainty. Science has not yet made clear what ‘mind’ is. What is ‘mind’? This is a very interesting question, and trying to construct an AI robot with its own mind is a huge and exciting challenge.
7 Important Points
There are seven points which should always be taken into consideration when we think about the opportunities and risks associated with AI. These seven common factors are receiving more and more international attention:
1. An AI oriented towards people
2. Protecting the dignity of individuals
3. Quality
4. Transparency and ease of explanation
5. Accountability
6. Privacy
7. Protecting Safety and Security
Implants for health care
- Chance
If realized, an implanted AI robot would be able to check your health anytime, anywhere. It could also cure illness inside your body. This means that you might no longer need to go to the hospital.
- Risk
Since it would be implanted in your body, it would be difficult to maintain and to deal with malfunctions. There are also concerns about invasion of privacy and abuse of personal data.
Implants for intelligence augmentation or enhancement
- Chance
If realized, you would be able to perform tasks you now do with your smartphone, computer, or the Internet just by thinking about them: for example, searching the web, typing text, sending e-mails, or taking pictures.
- Risk
Since it would be implanted in your body, it would be difficult to maintain and to deal with malfunctions. There are also concerns about invasion of privacy and abuse of personal data. Your decisions could also be manipulated from outside.
Career advisors (university, faculty, company, etc.)
- Chance
Based on data such as your background, abilities, and interests, AI would suggest an educational path and occupation that suit you.
- Risk
Even if you took the AI's advice, you would not necessarily succeed, because in principle AI cannot predict the future accurately. There are also concerns about invasion of privacy and abuse of personal data. Your life could also be controlled by others through AI.
Communication assistants
- Chance
AI would give you advice on what to say or how to behave in certain scenarios, for example, when it is hard for you to explain what you are feeling, or when you are in an unfamiliar situation.
- Risk
If the AI failed to properly grasp the situation or suggest an appropriate expression, relying on it could disrupt communication. There are also concerns about increasing dependence on technology in communication.
Chatbots
- Chance
AI would support or perform routine tasks for you, such as handling inquiries or paperwork.
- Risk
AI may fail to complete important tasks due to malfunctions. If the AI has not learned something fully, it could give an incorrect or inappropriate response.
Watching over children
- Chance
AI robots would watch over the health and safety of your family while you are away from them.
- Risk
AI robots could misidentify a situation and miss a sudden illness or accident. There are also concerns about invasion of privacy and abuse of personal data.
Housekeeping
- Chance
AI robots could help you with housework such as cleaning, washing, or cooking.
- Risk
If AI robots failed at learning or situation recognition, relying on them could make housework less efficient or cause accidents due to malfunction.
Health care
- Chance
AI could advise you on healthy meals and optimal sleeping time based on your health data.
- Risk
AI could give you inappropriate advice. Invasion of privacy and abuse of personal data are also concerns.
Elderly care
- Chance
AI robots could propose a care plan that is suitable for you. They could also provide the care themselves.
- Risk
AI robots could give you inappropriate advice. There is also a risk of accidents during care. Invasion of privacy and abuse of personal data are also potentially problematic.