
  • Article (Free Access)

    Some Impossibilities of Ranking in Generalized Tournaments

    In a generalized tournament, players may have an arbitrary number of matches against each other, and the outcome of each game is measured on a cardinal scale with lower and upper bounds. An axiomatic approach is applied to the problem of ranking the competitors. Self-consistency (SC) requires assigning the same rank to players with equivalent results, while a player showing an obviously better performance than another should be ranked strictly higher. According to order preservation (OP), if two players have the same pairwise ranking in two tournaments where the same players have played the same number of matches, then their pairwise ranking is not allowed to change in the aggregated tournament. We reveal that these two properties cannot be satisfied simultaneously on this universal domain.
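    The setting in this abstract can be made concrete with a toy sketch. Everything below is illustrative, not taken from the paper: the dictionary representation, the names `total_score` and `pairwise_ranking`, and the score-sum ranking rule are all assumptions; the paper's result is that *no* ranking method can satisfy SC and OP together on this domain.

    ```python
    # Illustrative sketch (assumed representation, not the paper's): a
    # generalized tournament as nested dicts, where results[i][j] is the list
    # of player i's outcomes against player j on a bounded scale [0, 1]
    # (e.g. 1 = win, 0.5 = draw, 0 = loss). Players may meet any number of times.

    def total_score(results, player):
        """Aggregate score of a player: the sum of all their match outcomes."""
        return sum(sum(scores) for scores in results[player].values())

    def pairwise_ranking(results, a, b):
        """Compare two players under a simple score-sum rule (hypothetical):
        returns 1 if a is ranked above b, -1 if below, 0 if tied."""
        sa, sb = total_score(results, a), total_score(results, b)
        return (sa > sb) - (sa < sb)

    # Two tournaments with the same players and the same number of matches.
    t1 = {"A": {"B": [1, 1]},   "B": {"A": [0, 0]}}
    t2 = {"A": {"B": [0.5, 1]}, "B": {"A": [0.5, 0]}}

    # Aggregated tournament: concatenate the match lists.
    agg = {p: {q: t1[p][q] + t2[p][q] for q in t1[p]} for p in t1}

    # Order preservation (OP): A is ranked above B in both t1 and t2, so a
    # ranking satisfying OP must also place A above B in the aggregate.
    assert pairwise_ranking(t1, "A", "B") == 1
    assert pairwise_ranking(t2, "A", "B") == 1
    assert pairwise_ranking(agg, "A", "B") == 1
    ```

    The score-sum rule happens to satisfy OP in this example; the paper's point is that, over the full domain of generalized tournaments, any rule satisfying OP must violate self-consistency somewhere, and vice versa.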

  • Article (No Access)

    Unpredictability of AI: On the Impossibility of Accurately Predicting All Actions of a Smarter Agent

    The young field of AI Safety is still in the process of identifying its challenges and limitations. In this paper, we formally describe one such impossibility result, namely Unpredictability of AI. We prove that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system. In conclusion, the impact of Unpredictability on AI Safety is discussed.

  • Article (No Access)

    Unexplainability and Incomprehensibility of AI

    Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want, and frequently need, to understand how decisions impacting them are made. Similarly, it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions, and that for the decisions they could explain, people would not understand some of those explanations.