
Some experts claim that there is no current evidence that AI can be controlled safely. If so, should it even be developed?

AI Safety expert Dr. Roman V. Yampolskiy explains in his book “AI: Unexplainable, Unpredictable, Uncontrollable” that the problem of AI control is one of the most important problems facing humanity, but even so it remains poorly understood, poorly defined, and poorly researched.

Dr. Yampolskiy claims he couldn’t find any proof in AI scientific literature that AI can be safely controlled—and the partial existing controls are not enough. He claims that it is wrong to assume that the AI control problem is solvable and warns that it is important to show that the problem is solvable before embarking on AI’s integration into all our daily lives.

Yampolskiy further states that humans’ ability to produce intelligent software far outstrips their ability to control or even verify it, and suggests advanced intelligent systems can never be fully controllable and so will always present a certain level of risk regardless of the benefit they provide.

According to Techxplore, AI and superintelligence differ from other programs in their ability to learn new behaviors, adjust their performance, and act semi-autonomously in novel situations. As an AI's capabilities increase, so does its autonomy, while our control over it decreases; Yampolskiy argues that increased autonomy is synonymous with decreased safety.

Furthermore, AI either cannot explain how it reached a decision, or we cannot understand the explanation it gives. And if we do not understand an AI's decisions, we cannot diagnose what went wrong or reduce the likelihood of future accidents.

Yampolskiy suggests a compromise in which we sacrifice some of a system's capability in return for greater control, while still granting it a certain degree of autonomy.

To minimize the risk of AI, Yampolskiy says it needs to be modifiable, with 'undo' options, and must be limitable, transparent, and easy to understand. He further suggests that all AI should be categorized as controllable or uncontrollable, and that we should consider even partial bans on certain types of AI technology.

He concludes by stating “We may not ever get to 100% safe AI, but we can make AI safer in proportion to our efforts, which is a lot better than doing nothing. We need to use this opportunity wisely.”