
Project

Static and dynamic countermeasures for implementation-level vulnerabilities.

Modern imperative programming languages such as C++, C#, and Java offer protection facilities such as abstract data types, field access modifiers, and module systems. These abstractions were designed mainly to enforce software engineering principles such as information hiding and encapsulation, but they can also be used to enforce security properties of programs. Unfortunately, such source-level security properties are typically lost during compilation to low-level machine code. For instance, access to private instance fields is restricted by the programming language's type system at the source-code level, but no such restriction exists at the assembly level. This leaves a software module vulnerable to assembly-level threats such as code-injection attacks and kernel-level malware.
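To make this gap concrete, consider the following hypothetical C++ sketch (an illustration, not taken from the dissertation): the type checker rejects direct access to a private field, yet code operating on raw memory, as injected or kernel-level code does, can simply overwrite the bytes at the field's offset.

    #include <cstdio>
    #include <cstring>

    class Account {
        long balance;                    // private: source-level access is rejected
    public:
        Account() : balance(0) {}
        void deposit(long amount) { if (amount > 0) balance += amount; }
        long get_balance() const { return balance; }
    };

    int main() {
        Account a;
        // a.balance = 1000000;          // error: 'balance' is private
        a.deposit(10);

        // Machine-level view: the object is just bytes, and the private field
        // sits at a known offset. Overwriting it bypasses the source-level check.
        long forged = 1000000;
        std::memcpy(reinterpret_cast<char*>(&a), &forged, sizeof forged);
        std::printf("%ld\n", a.get_balance());   // prints 1000000
        return 0;
    }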

In the first part of this dissertation, we present a compilation scheme that is fully abstract, meaning that the high-level security properties of a software module are preserved after compilation. We formalize this property as preservation of contextual equivalence: two compiled modules should be distinguishable by an assembly-level module interacting with them only if their original source-level modules can also be distinguished by a source-level module. In other words, a fully abstract compiler ensures that any possible assembly-level interaction can be explained at the source-code level. This effectively reduces the power of an assembly-level attacker to that of a source-level attacker. To achieve this strong security property, the compiler relies on the presence of a fine-grained, program counter-based memory access protection primitive in the assembly-level target language. We formalize our compilation scheme, prove that it is fully abstract, and show by means of a prototype implementation that the assumed memory access protection primitive can be realized efficiently on modern commodity hardware.
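In standard notation (which may differ from the dissertation's own formalization), writing [[.]] for compilation and using contextual equivalence at each level, full abstraction can be stated as:

    % [[.]] is the compiler; \simeq_S and \simeq_T are contextual equivalence
    % of source-level and assembly-level modules, respectively.
    \forall M_1, M_2 :\quad
      M_1 \simeq_S M_2 \;\Longleftrightarrow\; \llbracket M_1 \rrbracket \simeq_T \llbracket M_2 \rrbracket

The left-to-right direction (preservation) is the security-relevant one: modules that no source-level context can tell apart remain indistinguishable to every assembly-level context, which is exactly what reduces an assembly-level attacker to the power of a source-level attacker.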

In the second part of this dissertation, we discuss the sound modular verification of imperative programs executing in an unverified context. We focus on Hoare logic-based software verification techniques, which enable developers to statically prove correctness and safety properties of imperative programs. Unfortunately, the runtime guarantees offered by such verification techniques are relatively limited when the verified codebase is part of a program that also contains unverified code. In particular, unverified code might not behave as assumed by the verifier, leading to failure of all verified assertions that were based on those assumptions. This is particularly troublesome in memory-unsafe languages, where a memory safety error in unverified code can corrupt the runtime state of verified code.
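The following hypothetical C++ sketch (illustrative only; the contracts are written as comments rather than in a real verifier's annotation language) shows how a memory safety error in unverified code can invalidate a property that was proved about verified code:

    #include <cstdio>
    #include <cstring>

    // --- Verified module ---------------------------------------------------
    struct Counter { long value; };      // verified invariant: value >= 0

    // requires: c points to a valid Counter with c->value >= 0
    // ensures:  c->value >= 0
    void verified_increment(Counter* c) {
        c->value += 1;
        // The verifier proved that value >= 0 holds here, but the proof
        // assumed that no other code writes to *c.
    }

    // --- Unverified module -------------------------------------------------
    void unverified_fill(char* buf) {
        std::memset(buf, 0xFF, 64);      // out-of-bounds write past an 8-byte buffer
    }

    int main() {
        Counter c{0};
        char scratch[8];
        unverified_fill(scratch);        // depending on stack layout, this can
                                         // overwrite c.value with a negative value
        verified_increment(&c);          // the proved invariant may no longer hold
        std::printf("%ld\n", c.value);
        return 0;
    }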

We have developed a series of runtime checks to be inserted at the boundary between the verified and unverified parts of a program, which check that the unverified code behaves as expected. A key problem that we had to solve is how to ensure that memory errors or malicious code in the unverified part cannot corrupt the state of the verified part. We solve this problem in two steps. Firstly, the inserted boundary checks perform an integrity check on the heap memory owned by the verified code, ensuring that bad writes to heap memory by the unverified part are detected upon (re-)entry to verified code. Secondly, we use the fully abstract compilation mechanism developed in the first part of this dissertation to protect the integrity of the verified code's local variables and control-flow metadata on the runtime call stack. The combination of these protection measures results in a very strong modular soundness guarantee: no verified assertion in the verified codebase will ever fail at runtime, even if the verified codebase interacts with unverified code. We formalize the developed program transformations, prove that they are sound and precise, and show by means of micro- and macro-benchmarks that the performance overhead of the boundary checks is sufficiently low for practical applicability.
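As a rough illustration of the first step, the sketch below shows one possible realization of such a heap integrity check, with hypothetical names (compute_checksum, boundary_exit, boundary_entry); the dissertation's actual checks, and the stack protection provided by the fully abstract compilation mechanism, are more involved than this.

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    // Hypothetical bookkeeping for the heap region owned by the verified module.
    static const uint8_t* verified_heap      = nullptr;
    static size_t         verified_heap_size = 0;
    static uint64_t       saved_checksum     = 0;

    // FNV-1a hash over the verified heap; a real scheme would need a
    // stronger or faster integrity mechanism.
    static uint64_t compute_checksum(const uint8_t* p, size_t n) {
        uint64_t h = 1469598103934665603ull;
        for (size_t i = 0; i < n; i++) { h ^= p[i]; h *= 1099511628211ull; }
        return h;
    }

    // Inserted at every transition from verified to unverified code.
    void boundary_exit(void) {
        saved_checksum = compute_checksum(verified_heap, verified_heap_size);
    }

    // Inserted at every (re-)entry into verified code: if unverified code
    // tampered with the verified heap, fail stop instead of letting a
    // verified assertion silently become false.
    void boundary_entry(void) {
        if (compute_checksum(verified_heap, verified_heap_size) != saved_checksum) {
            std::fprintf(stderr, "integrity violation: verified heap was modified\n");
            std::abort();
        }
    }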

Date: 3 Oct 2011 → 30 Sep 2015
Keywords: Countermeasures, Vulnerabilities, Implementation
Disciplines: Applied mathematics in specific fields, Computer architecture and networks, Distributed computing, Information sciences, Information systems, Programming languages, Scientific computing, Theoretical computer science, Visual computing, Other information and computing sciences
Project type: PhD project