A Logical Approach for Automated Reasoning About Privacy in Security Protocols

Project Details

Layman's description

Imagine that you are using an application to access your medical record. The data contain your medical history as well as your social security number, date of birth, address, and other personally identifiable information. What could go wrong if somebody got hold of your medical record without your consent? Similar concerns arise for digital health in general, mobile payments, transport with smart cards, electronic voting, etc. Privacy is relevant to virtually any application that handles data, so studying it is critical, especially given the increasing digitalization of such applications. To protect sensitive information, it is crucial to have strong guarantees that systems respect privacy. New digital applications need to be secured and protected against any misuse, such as surveillance, profiling, stalking, or coercion (e.g., a doctor should be able to write prescriptions without pressure from pharmaceutical companies).

One way to formally specify how systems and applications work is to model them as security protocols: protocols that define how messages are exchanged between several parties, often relying on cryptographic operations. Such a protocol could represent, for example, how your phone communicates when you use a contact-tracing application. The typical approach to verifying privacy is to define a list of properties to check. However, this is very technical, and one can never be sure that all relevant properties have been identified, which means that attacks may still occur in practice despite successful verification. Instead, we want to reason about the knowledge an attacker has and what they can deduce from it. To do so, we characterize privacy properties using logical formulas. This approach expresses privacy in an intuitive and natural way: one states explicitly the information that is intentionally disclosed, and an attack then corresponds to learning anything more than this information through logical deduction.
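
To give a flavour of this idea, here is a minimal toy sketch in Python (a hypothetical illustration, not the project's actual formalism or tool): the attacker's knowledge is modelled as a set of facts closed under deduction rules, and privacy is violated exactly when something beyond the intentionally disclosed information becomes derivable.

    # Toy illustration (hypothetical, not the project's formalism):
    # attacker knowledge is a set of facts, closed under deduction rules,
    # and an "attack" is any derivable fact beyond what is intentionally disclosed.

    def close_under_deduction(facts, rules):
        """Apply rules (premises -> conclusion) until a fixed point is reached."""
        knowledge = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= knowledge and conclusion not in knowledge:
                    knowledge.add(conclusion)
                    changed = True
        return knowledge

    def find_privacy_violations(observed, rules, intentionally_disclosed):
        """Return every fact the attacker can deduce that was not meant to be learned."""
        return close_under_deduction(observed, rules) - intentionally_disclosed

    # Example: the attacker observes an encrypted record and the corresponding key.
    rules = [
        (frozenset({"enc(record, key)", "key"}), "record"),  # decryption rule
        (frozenset({"record"}), "date_of_birth"),            # the record contains the DoB
    ]
    observed = {"enc(record, key)", "key"}
    disclosed = {"enc(record, key)", "key"}  # only these were meant to be public

    print(find_privacy_violations(observed, rules, disclosed))
    # {'record', 'date_of_birth'}: the attacker deduces more than intended,
    # so privacy is violated.

Note that no individual privacy property such as "the date of birth stays secret" had to be listed in advance: the violation falls out of the general requirement that nothing beyond the disclosed information be deducible.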

The main goal of this PhD project is to develop new methods for the automated verification of privacy. The idea is to take as input the specification of a security protocol together with the information that is intentionally disclosed; a program then checks whether any attacks exist. With such a tool, we will be able to study existing systems and mathematically prove either that they respect privacy or that they violate it (by exhibiting an attack). It can also be used during the development of new applications, supporting a privacy-by-design mindset. Ultimately, this research will give users control over their data by providing society with the means to design privacy-friendly systems. If an application has been successfully verified, you have a mathematical guarantee that the protocol respects your privacy.
Status: Finished
Effective start/end date: 01/12/2021 – 30/11/2024