Models and Games for Quantifying Vulnerability of Secret Information

Piotr Mardziel

Quantitative information flow (QIF) is concerned with measuring the amount of secret information that leaks through a system’s observable behavior during its execution. The system takes secret (high) input and produces (low) output that can be observed by an adversary. Before the system is run, the adversary is assumed to have some a priori information about the secret. As the system executes, the adversary’s observations are combined with knowledge about how the system works, resulting in some a posteriori information about the secret. A general principle of QIF defines the leakage of an execution as the increase in the adversary’s information. Past work has studied how to precisely instantiate this principle, considering various notions of information and how they relate to each other, and increasingly powerful adversaries. For example, active adversaries may be allowed to provide (low) inputs to the system, manipulating it into leaking more data, and adaptive adversaries may choose these inputs based on the observable behavior of the system. Most approaches to QIF are limited in three regards: 1) assumption of static (unchanging) secrets, 2) focus only on the goals of the adversary (as opposed to the defender or secret holder), and 3) consideration of only passive defenders.
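As a concrete illustration of this principle, the sketch below measures leakage as the increase in Bayes vulnerability, the adversary's probability of guessing the secret in one try. The four candidate PINs, the uniform prior, and the leaky first-digit comparison are all invented for illustration; the literature studies many other measures of information.

```python
from fractions import Fraction

# Hypothetical secret: one of four PINs, uniformly likely a priori.
prior = {pin: Fraction(1, 4) for pin in ["1234", "1111", "0000", "4321"]}

# Hypothetical leaky checker: reveals whether the first digit of the
# adversary's guess matches the first digit of the secret.
def observe(secret, guess="1000"):
    return secret[0] == guess[0]

# Bayes vulnerability: the adversary's one-try guessing probability,
# i.e. the maximum probability in the distribution.
def vulnerability(dist):
    return max(dist.values())

# Bayesian update: one posterior distribution per observable output,
# paired with the probability of seeing that output.
def posteriors(prior, guess):
    by_obs = {}
    for secret, p in prior.items():
        by_obs.setdefault(observe(secret, guess), {})[secret] = p
    result = []
    for sub in by_obs.values():
        total = sum(sub.values())  # probability of this observation
        result.append((total, {s: p / total for s, p in sub.items()}))
    return result

prior_v = vulnerability(prior)                 # a priori: 1/4
post_v = sum(p_obs * vulnerability(post)       # expected a posteriori: 1/2
             for p_obs, post in posteriors(prior, "1000"))
leakage = post_v - prior_v                     # additive leakage: 1/4
```

Here the single observation halves the set of candidate secrets regardless of the output, so the adversary's guessing probability rises from 1/4 to 1/2: the leakage is the increase in the adversary's information, exactly as the principle above states.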

a) Dynamic secrets: QIF models and analyses typically assume that secret information is static. But real-world secrets evolve over time. Passwords, for example, should be changed periodically. Cryptographic keys have periods after which they must be retired. Memory offsets in address space randomization techniques are periodically regenerated. Medical diagnoses evolve, military convoys move, and mobile phones travel with their owners. Leaking the current value of these secrets is undesirable. But if information leaks about how these secrets change, adversaries might also be able to predict future secrets or infer past secrets. For example, an adversary who learns how people choose their passwords might have an advantage in guessing future passwords. Similarly, an adversary who learns a trajectory can infer future locations. So it is not just the current value of a secret that matters, but also how the secret changes. Methods for quantifying leakage and protecting secrets should, therefore, account for these dynamics. In recent work [3] we initiated the study of quantitative information flow for dynamic secrets.

b) Defender vs. Adversary Preference: Most approaches to QIF consider leakage only from the adversary’s point of view, whereas the goals and concerns of the defender—i.e., the party interested in protecting information—are overlooked. While in many cases the adversary’s gain corresponds directly to the defender’s loss, this is not always the case. In recent work [2] we explore this distinction, arguing that the actual leakage of an execution should be tied to the defender’s loss of secrecy rather than the adversary’s gain of information.
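The divergence between the two viewpoints can be made concrete with a small invented example: a channel for which the adversary's gain, measured as the increase in one-try guessing probability, is zero, while the defender's loss, measured as a drop in Shannon uncertainty, is positive. The prior, the channel, and the choice of measures below are illustrative assumptions, not the formal development of [2].

```python
from fractions import Fraction
from math import log2

# Hypothetical prior over three secrets.
prior = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}

# Hypothetical probabilistic channel: channel[secret][output] = probability.
channel = {
    "a": {"o1": Fraction(1, 2), "o2": Fraction(1, 2)},
    "b": {"o1": Fraction(1, 1)},
    "c": {"o2": Fraction(1, 1)},
}

# Posterior distributions, one per output, each paired with the
# probability of that output (Bayesian update through the channel).
def posteriors(prior, channel):
    joint = {}
    for s, ps in prior.items():
        for o, po in channel[s].items():
            joint.setdefault(o, {})[s] = ps * po
    result = []
    for sub in joint.values():
        total = sum(sub.values())
        result.append((total, {s: p / total for s, p in sub.items()}))
    return result

def bayes_vuln(dist):        # adversary's measure: one-try guess probability
    return max(dist.values())

def shannon_entropy(dist):   # defender's measure: residual uncertainty (bits)
    return -sum(float(p) * log2(float(p)) for p in dist.values() if p)

posts = posteriors(prior, channel)
adv_gain = sum(po * bayes_vuln(d) for po, d in posts) - bayes_vuln(prior)
def_loss = shannon_entropy(prior) - sum(
    float(po) * shannon_entropy(d) for po, d in posts)
# adv_gain is 0 (the best guess is no better after observing),
# yet def_loss is 0.5 bits of secrecy lost.
```

Each output leaves the adversary's best single guess at probability 1/2, the same as before the run, yet the defender's uncertainty about the secret has genuinely shrunk; which quantity counts as "the leakage" depends on whose preferences one adopts.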

c) Active Defender: Finally, the owner of the secret information, the defender, is usually assumed to be passive: either oblivious to the capabilities and actions of the adversary, or entirely unable to influence the flow of information. In recent and ongoing work [1] we analyze the selection of secrets and attacks on them from a game-theoretic perspective, with the defender and the adversary each responding to the abilities and preferences of the other.
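The simplest instance of such a game can be sketched as follows: an active defender commits to a distribution over secrets, and the adversary best-responds by guessing the most likely one. This toy two-secret, zero-sum game and its payoffs are assumptions for illustration only, not the games studied in [1].

```python
from fractions import Fraction

# Hypothetical zero-sum guessing game: the defender commits to a
# distribution over two secrets {0, 1}; the adversary then guesses once.
# The adversary's payoff is the probability of a correct guess, which
# the defender wants to minimize.

def adversary_best_response(p0):
    """Success probability of an adversary guessing the likelier secret."""
    return max(p0, 1 - p0)

# Defender's minimax choice: scan a grid of candidate distributions.
candidates = [Fraction(i, 10) for i in range(11)]
best_p0 = min(candidates, key=adversary_best_response)
# The uniform choice (p0 = 1/2) minimizes the adversary's success at 1/2.
```

Even this degenerate game shows the shift in perspective: the defender's optimal behavior (here, choosing secrets uniformly) is determined by anticipating the adversary's best response, rather than being fixed in advance.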