The growing popularity and adoption of differential privacy in academic and industrial settings has resulted in the development of increasingly sophisticated algorithms for releasing information while preserving privacy. Accompanying this phenomenon is the natural rise in the development and publication of incorrect algorithms, demonstrating the necessity of formal verification tools. However, existing formal methods for differential privacy face a dilemma: methods based on customized logics can verify sophisticated algorithms but come with a steep learning curve and a significant annotation burden on programmers, while existing type systems lack the expressiveness needed for some sophisticated algorithms. In this paper, we present AutoPriv, a simple imperative language that strikes a better balance between expressive power and usability. The core of AutoPriv is a novel relational type system that separates relational reasoning from privacy budget calculations. With dependent types, the type system is powerful enough to verify sophisticated algorithms where the composition theorem falls short. In addition, the inference engine of AutoPriv infers most of the proof details, and even searches for the proof with minimal privacy cost when multiple proofs exist. We show that AutoPriv verifies sophisticated algorithms with little manual effort.
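A standard example of an algorithm where naive application of the composition theorem over-counts the privacy budget is the Sparse Vector Technique (not named in the abstract; used here only as a well-known illustration). Below is a hedged Python sketch of its AboveThreshold variant: the whole run costs epsilon, whereas charging each query separately under sequential composition would cost epsilon per query. The function names and noise scales follow the textbook presentation and are assumptions, not code from AutoPriv.

```python
import math
import random

def laplace(scale):
    """Sample from Laplace(0, scale) via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def above_threshold(query_answers, threshold, epsilon):
    """Sparse Vector Technique (AboveThreshold variant, sketch).

    Given a stream of answers to 1-sensitive queries, report which query
    first exceeds a noisy threshold, then halt.  The total privacy cost
    is epsilon for the entire run; naive sequential composition would
    instead charge epsilon for every query examined.
    """
    noisy_t = threshold + laplace(2.0 / epsilon)  # threshold perturbed once
    results = []
    for answer in query_answers:
        if answer + laplace(4.0 / epsilon) >= noisy_t:
            results.append(True)   # first "above" answer: report and stop
            break
        results.append(False)      # "below" answers are (nearly) free
    return results

# Example run (output is randomized):
# above_threshold([0.1, 0.2, 5.0, 0.3], threshold=1.0, epsilon=1.0)
```

Proving the epsilon bound for this algorithm requires relational reasoning about the correlated noise on the threshold and the queries, which is exactly the style of argument the composition theorem alone cannot capture.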
Fri 20 Jan
16:30 - 17:45
LMS-Verify: Abstraction Without Regret for Verified Systems Programming
Hypercollecting Semantics and its Application to Static Analysis of Information Flow
LightDP: Towards Automating Differential Privacy Proofs