When Should Machines Make Decisions?

 

Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

When should we let machines choose for us? Most of us already let Google Maps pick the best route to a new destination. Many of us are excited that self-driving cars will soon carry us anywhere while we work or daydream. But are you ready to let your driverless car choose your destination for you?

A car might infer that your ultimate goal is to eat, shop, or run an errand, but sometimes you want to pick a particular store or restaurant yourself, and in those cases you do not want the car deciding for you.

What about more complex decisions? Should weapons be allowed to choose whom to kill? If so, on what grounds would they make that choice? And how will we stay in control once artificial intelligence becomes smarter than we are? If an AI knows more about the world around us and understands our preferences better than we do, would it be better for it to make all our decisions for us?

These are hard questions to answer. I put them to two experts in AI research, and their answers amounted to: “Yes, it’s difficult” and “That’s right, it’s very, very difficult.”

Everyone I spoke with agreed that the question of when machines should make decisions touches on some of the most difficult aspects of building AI.

“I think it’s really important,” said Susan Craw, a research professor at Robert Gordon University in Aberdeen. “Otherwise you will have a system that wants to do things for you even when you don’t need them, and there will be situations where you are not happy with how, or by what means, the system does them.”

Joshua Greene, a psychologist at Harvard, got to the heart of the questions this principle raises.

“It’s interesting, because it’s not entirely clear what it would mean to violate this rule,” Greene explained. “What decision could an AI system make that wasn’t in some way built into the system by a person?” AI is a human creation, so in practice the principle is about which specific decisions we consciously allow machines to make. We may be happy for a machine to make certain decisions, but whatever it decides, we want to be sure that the choice to delegate that decision was deliberately made by a person.

“Take, for example, a robot that can walk. The person operating that robot won’t set the angle of every joint or decide exactly where each foot lands. Instead, the person says: I’m happy for the machine to make those decisions on its own, as long as they don’t conflict with some higher-level command.”
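As a concrete illustration of the hierarchy Greene describes, here is a minimal Python sketch. Everything in it is a hypothetical stand-in (the Step type, the keep-out zone standing for a higher-level human command): the robot ranks candidate footholds by its own criterion, but only among options that do not conflict with the human’s command, and it hands control back when no such option exists.

```python
from dataclasses import dataclass

@dataclass
class Step:
    x: float  # candidate foot placement, metres ahead
    y: float  # metres to the side

def violates_human_command(step: Step, keep_out: tuple) -> bool:
    """Higher-level human command: never place a foot inside this zone."""
    x0, y0, x1, y1 = keep_out
    return x0 <= step.x <= x1 and y0 <= step.y <= y1

def plan_step(candidates: list[Step], keep_out: tuple) -> Step:
    """Delegated low-level decision: the robot chooses freely among
    the footholds that the human-set constraint allows."""
    allowed = [s for s in candidates if not violates_human_command(s, keep_out)]
    if not allowed:
        # No foothold satisfies the command: escalate rather than decide.
        raise RuntimeError("no admissible foothold; defer to the operator")
    # The robot's own criterion (here: the shortest step) settles the rest.
    return min(allowed, key=lambda s: s.x ** 2 + s.y ** 2)

# The machine decides where the foot lands, within the human's limits.
print(plan_step([Step(0.3, 0.1), Step(0.5, 0.0)], keep_out=(0.0, -0.1, 0.4, 0.2)))
```

The design point is that the human never touches the low-level choice; they only own the constraint, which is the sense in which the decision was “deliberately delegated.”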

Roman Yampolskiy, an AI researcher at the University of Louisville, suggested that we have already handed more decision-making power to AI than many people realize.

“In fact, we have already turned control of decisions over to machines in many areas,” said Yampolskiy. “More than 85 percent of all stock trades are executed by AI, and AI monitors the operation of power plants, nuclear reactors, and electrical grids. AI coordinates traffic lights, and in some cases it is even entrusted with military response to a nuclear attack, the so-called ‘dead hand.’ The complexity and speed of these processes rule out meaningful human oversight. We are simply not fast enough to follow in real time what happens in algorithmic trading or with military drones, and we cannot weigh thousands of variables or understand complicated mathematical models. Our dependence on machines will only grow. As long as they make the right decisions (decisions we would reach ourselves if we were smart enough, had the right data, and had enough time), we agree with them. We would like to intervene only when a machine’s decision contradicts our own, but identifying those situations is currently an unsolved problem, and one we should be paying attention to now.”
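The division of labor Yampolskiy sketches, machines deciding at machine speed with humans intervening only on contradiction, can be pictured with a toy like the one below. The trading rule, the names, and the preference check are all invented for illustration, and the hard part he names (reliably detecting that a machine’s decision contradicts ours) is exactly what the crude conflicts_with_human stand-in glosses over.

```python
def machine_decision(price: float) -> str:
    """Stand-in for an algorithmic-trading policy acting at machine speed."""
    return "buy" if price < 100.0 else "sell"

def conflicts_with_human(action: str, price: float, limits: dict) -> bool:
    """Crude proxy for 'this contradicts what the human would choose'."""
    return action == "buy" and price > limits["max_buy_price"]

def decide(price: float, limits: dict) -> str:
    action = machine_decision(price)
    if conflicts_with_human(action, price, limits):
        return "escalate to a human"  # intervene only on contradiction
    return action                     # otherwise the machine acts alone

for price in (80.0, 90.0, 120.0):
    print(price, "->", decide(price, {"max_buy_price": 85.0}))
# 80.0 -> buy; 90.0 -> escalate to a human; 120.0 -> sell
```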

Greene also elaborated on this point: “The danger lies in machines that make decisions more complex and open-ended than ‘how to take the next step.’ Once a machine can make deliberate, flexible decisions, how can you confidently give it a single instruction without spelling out all the details? When you hand a problem to an employee and say ‘just get it done,’ you don’t add: ‘but don’t kill anyone, don’t break any laws, and don’t spend all the company’s money solving this minor problem.’ There are many conditions we never state because they seem obvious, and yet they matter enormously.”

“I like the spirit of this principle,” he continued. “It makes explicit what follows from the broader idea of responsibility: every decision is either made by a person or deliberately delegated to a machine. But it will be hard to uphold once AI begins to behave more flexibly and thoughtfully.”

Trust and responsibility

AI systems are often compared to children in terms of how much training they have had and how much they have learned. And just as with children, we hesitate to give machines too much autonomy until we are confident they have become responsible and safe enough. We may reasonably trust AI with maps, financial trading, and the operation of electrical grids, but can that trust carry into a future where AI systems grow far more complex and our safety and well-being are at stake?
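One way to make that graduated trust concrete is a delegation gate: the machine decides only in domains where it has a proven track record, and only when its own reported confidence clears the bar set for that domain; everywhere else the decision stays with a person. A minimal sketch, with invented domains and thresholds:

```python
# Hypothetical per-domain trust thresholds: only domains where the system
# has earned a track record appear here, and each has its own bar to clear.
TRUSTED = {"route_planning": 0.90, "stock_trading": 0.95, "grid_operation": 0.99}

def who_decides(domain: str, model_confidence: float) -> str:
    threshold = TRUSTED.get(domain)
    if threshold is None or model_confidence < threshold:
        return "human decides"  # trust not yet earned in this domain
    return "machine decides"

print(who_decides("route_planning", 0.97))  # machine decides
print(who_decides("medical_triage", 0.99))  # human decides: unknown domain
```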

John Havens, executive director of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, explained: “People need to be sufficiently informed, across many areas, to oversee the decision-making process until AI becomes safe enough for us.”

“Studies show that right now is the least safe time, because you often hear things like: ‘Just sit back and do nothing, the system works for you 99 percent of the time, you’ll be fine.’ That is the most dangerous situation,” he added, referring to recent research showing that people stop paying attention when highly reliable autonomous systems, such as today’s cars with autopilot features, break down. The research shows that in those moments people struggle to identify the problem and often cannot cope with it.

“I think people should, first and foremost, rely on themselves,” Havens concluded.

Patrick Lin, a philosopher at California Polytechnic State University, raised a further problem beyond those mentioned above: it is unclear who is responsible when machines do something wrong.

“I’m not saying that humans must control machines everywhere and always,” Lin said. “It depends on the decision, and delegation can create new problems… This comes back to the idea of machine control and human responsibility. If machines are not under human control, the question becomes who is responsible… Context matters. Everything depends on which decisions we are talking about, and from that we can determine the appropriate level of human control over the machines.”

Susan Schneider, a philosopher at the University of Connecticut, also worried about how these problems could be exacerbated by the arrival of superintelligence.

“Even now it is sometimes hard to understand why a system made a particular decision,” she said, adding: “If we do decide to hand decisions over to an AI, I don’t know how we could verify and control such a system. The old approaches to system verification no longer apply.”

What do you think?

Should people constantly monitor the decisions machines make? Is that even possible? At what point should a machine be allowed to take control? There are cases where a machine can make the better decision and even protect us, but is that the whole story? In which situations would you rather decide for yourself, and when would you hand the choice to an AI?

The post When Should Machines Make Decisions? appeared first on Future of Life Institute.

 
