Hi, I'm Florian, ...

... a postdoctoral computer security researcher at Imperial College London.

I am interested in diverse aspects of distributed systems security as well as information security and privacy. My goal is to enhance the security of existing systems as well as to build secure systems from the ground up. Whenever appropriate, I am interested in the formalization of secure systems and corresponding security properties. Realizing that users are often the weakest link in the security chain, I am also interested in usability aspects of security and privacy technology.

I am working with Peter Pietzuch in the Large-Scale Data & Systems Group.


Also see Google Scholar and DBLP.
Florian Kelbert, Alexander Pretschner: Data Usage Control for Distributed Systems, ACM Transactions on Privacy and Security (TOPS, formerly TISSEC), Conditionally accepted (undergoing minor revisions).
Data usage control enables data owners to enforce policies over how their data may be used after it has been released and accessed. We address the distributed aspects of this problem, which arise if the protected data resides within multiple systems. We contribute by formalizing, implementing, and evaluating a fully decentralized system that (i) generically and transparently tracks protected data across systems, (ii) propagates the corresponding data usage policies along with the data, and (iii) efficiently and preventively enforces policies in a decentralized manner. The evaluation shows that (i) data flow tracking and policy propagation achieve a throughput of 21%–54% of native execution, and (ii) decentralized policy enforcement outperforms a centralized approach in many situations.
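The core idea can be illustrated with a minimal Python sketch (all names and the policy format are invented for illustration; the paper's actual model and implementation are far more general): data items are tracked per container on each system, and when data crosses systems, its policy travels with it.

```python
# Illustrative sketch only -- not the system from the paper. Each system
# records which of its containers (files, sockets, ...) hold which data
# items; policies stick to the data and are shipped along on transfer.

class System:
    def __init__(self, name):
        self.name = name
        self.containers = {}  # container id -> set of data item ids
        self.policies = {}    # data item id -> policy (hypothetical format)

    def write(self, container, data_item, policy=None):
        self.containers.setdefault(container, set()).add(data_item)
        if policy is not None:
            self.policies[data_item] = policy

    def transfer(self, container, dst, dst_container):
        """Copy a container's data to another system, propagating policies."""
        for item in self.containers.get(container, set()):
            dst.write(dst_container, item, self.policies.get(item))

def copies_of(systems, data_item):
    """Count containers across all systems holding a given data item."""
    return sum(1 for s in systems
                 for held in s.containers.values() if data_item in held)

a, b = System("A"), System("B")
a.write("/tmp/contract.pdf", "contract", {"max_copies": 2})
a.transfer("/tmp/contract.pdf", b, "/home/bob/contract.pdf")
```

With both copies and the policy known on system B, a local decision point could enforce the hypothetical `max_copies` bound without consulting a central component.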
Mohsen Ahmadvand, Alexander Pretschner, Florian Kelbert: A Taxonomy of Software Integrity Protection Techniques, Advances in Computers, Conditionally accepted (undergoing minor revisions).
Software tampering by Man-At-The-End (MATE) attackers can lead to security circumvention, privacy violations, reputation damage, and revenue loss. In this model, adversaries are end users who have full control over the software as well as its execution environment. This full control enables them to tamper with programs to their benefit and to the detriment of software vendors or other end users. Software integrity protection research seeks means to mitigate such attacks. Since the seminal work of Aucsmith, a great deal of research effort has been devoted to fighting MATE attacks, and many protection schemes have been designed by both academia and industry. Advances in trusted hardware, such as TPM and Intel SGX, have also enabled researchers to utilize such technologies for additional protection. Despite the introduction of various protection schemes, there is no comprehensive study that compares the advantages and disadvantages of the different schemes. The constraints of different schemes and their applicability in various industrial settings have not been studied. More importantly, except for some partial classifications, to the best of our knowledge, there is no taxonomy of integrity protection techniques. These limitations have left practitioners in doubt about the effectiveness and applicability of such schemes to their infrastructure. In this work, we propose a taxonomy that captures protection processes by encompassing system, defense, and attack perspectives. We then carry out a survey and map the reviewed papers onto our taxonomy. Finally, we correlate different dimensions of the taxonomy and discuss our observations along with research gaps in the field.
Severin Kacianka, Kristian Beckers, Florian Kelbert, Prachi Kumari: How Accountability is Implemented and Understood in Research Tools: A Systematic Mapping Study, 18th International Conference on Product-Focused Software Process Improvement (PROFES), November 2017, Innsbruck, Austria.
[Context/Background]: With the increasing use of cyber-physical systems in complex socio-technical setups, mechanisms that hold specific entities accountable for safety and security incidents are needed. Although there exist models that try to capture and formalize accountability concepts, many of these lack practical implementations. We hence know little about how accountability mechanisms work in practice and how specific entities could be held responsible for incidents. [Goal]: As a step towards the practical implementation of providing accountability, this systematic mapping study investigates existing implementations of accountability concepts with the goal of (1) identifying a common definition of accountability and (2) identifying the general trend of practical research. [Method]: To survey the literature for existing implementations, we conducted a systematic mapping study. [Results]: We contribute a systematic overview of current accountability realizations and requirements for future accountability approaches. [Conclusions]: We find that existing practical accountability research lacks a common definition of accountability in the first place. The research field seems rather scattered, with no generally accepted architecture and/or set of requirements. While most accountability implementations focus on privacy and security, no safety-related approaches seem to exist. Furthermore, we did not find extensive references to relevant and related concepts such as reasoning, log analysis, and causality.
Joshua Lind, Ittay Eyal, Florian Kelbert, Oded Naor, Peter Pietzuch, Emin Gun Sirer: Teechain: Scalable Blockchain Payments using Trusted Execution Environments, arXiv:1707.05454, July 2017.
Blockchain protocols such as Bitcoin are gaining traction for exchanging payments in a secure and decentralized manner. Their need to achieve consensus across a large number of participants, however, fundamentally limits their performance. We describe Teechain, a new off-chain payment protocol that utilizes trusted execution environments (TEEs) to perform secure, efficient and scalable fund transfers on top of a blockchain, with asynchronous blockchain access. Teechain introduces secure payment chains to route payments across multiple payment channels. Teechain mitigates failures of TEEs with two strategies: (i) backups to persistent storage and (ii) a novel variant of chain replication. We evaluate an implementation of Teechain using Intel SGX as the TEE and the operational Bitcoin blockchain. Our prototype achieves orders of magnitude improvement in most metrics compared to existing implementations of payment channels: with replicated Teechain nodes in a transatlantic deployment, we measure a throughput of over 33,000 transactions per second with 0.1 second latency.
@article{Lind2017Teechain,
author = {Joshua Lind and Ittay Eyal and Florian Kelbert and Oded Naor and Peter Pietzuch and Emin G{\"{u}}n Sirer},
title = {{Teechain: Scalable Blockchain Payments using Trusted Execution Environments}},
journal = {CoRR},
volume = {abs/1707.05454},
year = {2017},
url = {http://arxiv.org/abs/1707.05454}
}
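The channel idea behind Teechain can be pictured with a toy Python sketch (the `Channel` class and its fields are our invention and ignore the TEE, replication, and Bitcoin specifics): parties deposit once, exchange many off-chain balance updates, and only the final balances would be settled on the blockchain.

```python
# Toy payment channel between two parties "A" and "B". In Teechain this
# state would live inside an Intel SGX enclave and be replicated; here it
# is a plain object, purely for illustration.

class Channel:
    def __init__(self, deposit_a, deposit_b):
        self.balances = {"A": deposit_a, "B": deposit_b}
        self.seq = 0  # monotonically increasing state counter

    def pay(self, sender, amount):
        receiver = "B" if sender == "A" else "A"
        if amount <= 0 or self.balances[sender] < amount:
            raise ValueError("invalid payment")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.seq += 1  # each update supersedes the previous one

    def settle(self):
        """Final balances that would be written back to the blockchain."""
        return dict(self.balances)

ch = Channel(10, 10)
for _ in range(3):
    ch.pay("A", 2)  # three payments, no blockchain interaction
ch.pay("B", 1)
```

Throughput comes from the fact that only deposits and the final settlement touch the chain; everything in between is local state updates.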
Joshua Lind, Christian Priebe, Divya Muthukumaran, Dan O'Keeffe, Pierre-Louis Aublin, Florian Kelbert, Tobias Reiher, David Goltzsche, David Eyers, Rüdiger Kapitza, Christof Fetzer, Peter Pietzuch: Glamdring: Automatic Application Partitioning for Intel SGX, In 2017 USENIX Annual Technical Conference (ATC), July 2017, Santa Clara, CA, USA.
Trusted execution support in modern CPUs, as offered by Intel SGX enclaves, can protect applications in untrusted environments. While prior work has shown that legacy applications can run in their entirety inside enclaves, this results in a large trusted computing base (TCB). Instead, we explore an approach in which we partition an application and use an enclave to protect only security-sensitive data and functions, thus obtaining a smaller TCB.
We describe Glamdring, the first source-level partitioning framework that secures applications written in C using Intel SGX. A developer first annotates security-sensitive application data. Glamdring then automatically partitions the application into untrusted and enclave parts: (i) to preserve data confidentiality, Glamdring uses dataflow analysis to identify functions that may be exposed to sensitive data; (ii) for data integrity, it uses backward slicing to identify functions that may affect sensitive data. Glamdring then places security-sensitive functions inside the enclave, and adds runtime checks and cryptographic operations at the enclave boundary to protect it from attack. Our evaluation of Glamdring with the Memcached store, the LibreSSL library, and the Digital Bitbox bitcoin wallet shows that it achieves small TCB sizes and has acceptable performance overheads.
@inproceedings {Lind2017Glamdring,
author = {Joshua Lind and Christian Priebe and Divya Muthukumaran and Dan O{\textquoteright}Keeffe and Pierre-Louis Aublin and Florian Kelbert and Tobias Reiher and David Goltzsche and David Eyers and R{\"u}diger Kapitza and Christof Fetzer and Peter Pietzuch},
title = {{Glamdring: Automatic Application Partitioning for Intel SGX}},
booktitle = {2017 USENIX Annual Technical Conference (USENIX ATC 17)},
year = {2017},
isbn = {978-1-931971-38-6},
address = {Santa Clara, CA},
pages = {285--298},
url = {https://www.usenix.org/conference/atc17/technical-sessions/presentation/lind},
publisher = {USENIX Association}
}
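Contribution (i), the dataflow-driven placement, can be pictured as a reachability computation. A Python sketch with an invented example graph (Glamdring itself works on C source code via static analysis):

```python
# Illustrative sketch of dataflow-driven partitioning: starting from
# functions annotated as handling sensitive data, every function that the
# data may flow to is placed inside the enclave; the rest stays outside.

def partition(dataflow_edges, annotated):
    """dataflow_edges: {fn: set of fns that sensitive data may flow to}."""
    enclave, frontier = set(annotated), list(annotated)
    while frontier:
        fn = frontier.pop()
        for succ in dataflow_edges.get(fn, set()):
            if succ not in enclave:
                enclave.add(succ)
                frontier.append(succ)
    return enclave

# hypothetical key-value store dataflow graph
edges = {
    "parse_request": {"lookup_key"},
    "lookup_key":    {"encrypt_reply"},
    "log_stats":     set(),  # never touches sensitive data
    "encrypt_reply": set(),
}
enclave_fns = partition(edges, {"lookup_key"})  # developer annotation
```

For integrity, the paper additionally walks edges backwards (a backward slice), marking functions that may affect the sensitive data.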
Pierre-Louis Aublin, Florian Kelbert, Dan O'Keeffe, Divya Muthukumaran, Christian Priebe, Joshua Lind, Robert Krahn, Christof Fetzer, David Eyers, Peter Pietzuch: Poster: LibSEAL: Detecting Service Integrity Violations Using Trusted Execution, Poster at the Twelfth European Conference on Computer Systems (EuroSys), April 2017, Belgrade, Serbia.
Internet users have become reliant on a swathe of online services for everyday tasks and expect them to uphold service integrity. However, data loss or corruption do happen despite service providers’ best efforts. In such cases, users often have little recourse. Our goal is to strengthen the position of users by helping them to discover and prove integrity violations by Internet services.
LibSEAL is a SEcure Audit Library for Internet services that (i) transparently creates a non-repudiable audit log of service operations and (ii) checks invariants over that log to discover service integrity violations. LibSEAL protects the confidentiality of code and data by executing inside an Intel SGX trusted execution environment (called enclave). LibSEAL securely and effectively discovers service integrity violations, while reducing throughput by at most 32%.
@inproceedings{Aublin2017LibSEALPoster,
author = {Pierre-Louis Aublin and Florian Kelbert and Dan O'Keeffe and Divya Muthukumaran and Christian Priebe and Joshua Lind and Robert Krahn and Christof Fetzer and David Eyers and Peter Pietzuch},
title = {{Poster: LibSEAL: Detecting Service Integrity Violations Using Trusted Execution}},
booktitle = {Proceedings of the Twelfth European Conference on Computer Systems},
series = {EuroSys '17},
year = 2017,
month = apr,
location = {Belgrade, Serbia},
publisher = {ACM},
address = {New York, NY, USA}
}
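The invariant-checking idea can be sketched as follows (Python; the log format and the freshness invariant are our own example, not one of LibSEAL's shipped invariants):

```python
# Illustrative sketch: service operations are appended to an audit log,
# and an invariant query over the log flags integrity violations, e.g. a
# read that returns stale data after a newer write was acknowledged.

def check_read_freshness(log):
    """Return indices of reads that did not return the latest written value."""
    latest, violations = {}, []
    for i, (op, key, value) in enumerate(log):
        if op == "write":
            latest[key] = value
        elif op == "read" and latest.get(key) != value:
            violations.append(i)
    return violations

log = [
    ("write", "doc1", "v1"),
    ("read",  "doc1", "v1"),  # consistent
    ("write", "doc1", "v2"),
    ("read",  "doc1", "v1"),  # stale read: integrity violation
]
```

Because the log itself is produced non-repudiably inside the enclave, a flagged entry is evidence the user can show to the provider.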
Pierre-Louis Aublin, Florian Kelbert, Dan O'Keeffe, Divya Muthukumaran, Christian Priebe, Joshua Lind, Robert Krahn, Christof Fetzer, David Eyers, Peter Pietzuch: TaLoS: Secure and Transparent TLS Termination inside SGX Enclaves, Imperial College London, Technical Report 2017/5, March 2017.
We introduce TaLoS, a drop-in replacement for existing transport layer security (TLS) libraries that protects itself from a malicious environment by running inside an Intel SGX trusted execution environment. By minimising the number of enclave transitions and reducing the overhead of the remaining enclave transitions, TaLoS imposes an overhead of no more than 31% in our evaluation with the Apache web server and the Squid proxy.
@techreport{Aublin2017TaLoS,
title = {{TaLoS: Secure and Transparent TLS Termination inside SGX Enclaves}},
author = {Pierre-Louis Aublin and Florian Kelbert and Dan O'Keeffe and Divya Muthukumaran and Christian Priebe and Joshua Lind and Robert Krahn and Christof Fetzer and David Eyers and Peter Pietzuch},
year = 2017,
month = mar,
institution = {Imperial College London},
number = {2017/5},
note = {Technical Report, \url{https://www.doc.ic.ac.uk/research/technicalreports/2017/#5}}
}
Florian Kelbert, Franz Gregor, Rafael Pires, Stefan Köpsell, Marcelo Pasin, Aurélien Havet, Valerio Schiavoni, Pascal Felber, Christof Fetzer, Peter Pietzuch: SecureCloud: Secure Big Data Processing in Untrusted Clouds, In Proc. 2017 Design, Automation & Test in Europe Conference & Exhibition (DATE), March 2017, Lausanne, Switzerland.
We present the SecureCloud EU Horizon 2020 project, whose goal is to enable new big data applications that use sensitive data in the cloud without compromising data security and privacy. For this, SecureCloud designs and develops a layered architecture that allows for (i) the secure creation and deployment of secure micro-services; (ii) the secure integration of individual micro-services to full-fledged big data applications; and (iii) the secure execution of these applications within untrusted cloud environments. To provide security guarantees, SecureCloud leverages novel security mechanisms present in recent commodity CPUs, in particular, Intel's Software Guard Extensions (SGX). SecureCloud applies this architecture to big data applications in the context of smart grids. We describe the SecureCloud approach, initial results, and considered use cases.
@inproceedings{Kelbert2017SecureCloud,
author={Florian Kelbert and Franz Gregor and Rafael Pires and Stefan K{\"o}psell and Marcelo Pasin and Aur{\'e}lien Havet and Valerio Schiavoni and Pascal Felber and Christof Fetzer and Peter Pietzuch},
booktitle={Design, Automation \& Test in Europe Conference \& Exhibition (DATE), 2017},
title={{SecureCloud: Secure Big Data Processing in Untrusted Clouds}},
year={2017},
keywords={Cloud computing;Containers;Encryption;Hardware;Program processors}
}
Florian Kelbert, Alexander Fromm: Compliance Monitoring of Third-Party Applications in Online Social Networks, In 2016 IEEE Symposium on Security and Privacy Workshops (International Workshop on Privacy Engineering, IWPE), May 2016, San Jose, CA, USA.
With the widespread adoption of Online Social Networks (OSNs), users increasingly also use corresponding third-party applications (TPAs), such as social games and applications for collaboration. To improve their social experience, TPAs access users’ personal data via an API provided by the OSN. Applications are then expected to comply with certain security and privacy policies when handling the users’ data. However, in practice, they might store, use, and distribute that data in all kinds of unapproved ways. We present an approach that transparently enforces security and privacy policies on TPAs that integrate with OSNs. To this end, we integrate concepts and implementations from the research areas of data usage control and information flow control. We instantiate these results in the context of TPAs in OSNs in order to enforce compliance with security and privacy policies that are provided by the OSN operator. We perform a preliminary evaluation of our approach on the basis of a TPA that integrates with the Facebook API.
@inproceedings{Kelbert2016Compliance,
author={Florian Kelbert and Alexander Fromm},
booktitle={2016 IEEE Security and Privacy Workshops (SPW)},
title={{Compliance Monitoring of Third-Party Applications in Online Social Networks}},
year={2016},
keywords={application program interfaces;data privacy;social networking (online);API;OSN;compliance monitoring;online social network;privacy policy;security policy;third-party application;Data privacy;Databases;Engines;Facebook;Monitoring;Privacy;Security;compliance;data usage control;online social networks;privacy policies;third-party applications}
}
Severin Kacianka, Florian Kelbert, Alexander Pretschner: Towards a Unified Model of Accountability Infrastructures, In 1st Workshop on Causal Reasoning for Embedded and safety-critical Systems Technologies (CREST), April 2016, Eindhoven, The Netherlands.
Accountability aims to provide explanations for why unwanted situations occurred, thus providing means to assign responsibility and liability. As such, accountability has slightly different meanings across the sciences. In computer science, our focus is on providing explanations for technical systems, in particular if they interact with their physical environment using sensors and actuators and may do serious harm. Accountability is relevant when considering safety, security and privacy properties and we realize that all these incarnations are facets of the same core idea. Hence, in this paper we motivate and propose a model for accountability infrastructures that is expressive enough to capture all of these domains. At its core, this model leverages formal causality models from the literature in order to provide a solid reasoning framework. We show how this model can be instantiated for several real-world use cases.
@inproceedings{Kacianka2016Accountability,
author = {Severin Kacianka and Florian Kelbert and Alexander Pretschner},
title = {{Towards a Unified Model of Accountability Infrastructures}},
booktitle = {1st Workshop on Causal Reasoning for Embedded and safety-critical Systems Technologies (CREST)},
doi = {10.4204/EPTCS.224.5},
year = 2016,
url = {http://arxiv.org/abs/1608.07882}
}
Florian Kelbert: Data Usage Control for Distributed Systems, Dissertation, Technical University of Munich, Germany, 253 pages, March 2016.
This thesis is concerned with controlling the usage of sensitive data once it has been disseminated to multiple systems. To this end, a formal model for distributed data usage control is proposed that allows for tracking data flows across systems and for enforcing distributed data usage policies in a decentralized manner. The correctness of the provided formal methods is proven. Further, the proposed ideas are implemented and evaluated in terms of security as well as communication and performance overheads.
@phdthesis {Kelbert:2016:Thesis,
author = {Kelbert, Florian Manuel},
title = {{Data Usage Control for Distributed Systems}},
type = {Dissertation},
school = {Technische Universit{\"a}t M{\"u}nchen},
address = {M{\"u}nchen},
month = mar,
year = 2016
}
Florian Kelbert, Alexander Pretschner: A Fully Decentralized Data Usage Control Enforcement Infrastructure, In Proc. 13th International Conference on Applied Cryptography and Network Security (ACNS), Springer LNCS 9092, pages 409-430, June 2015, New York City, NY, USA.
Distributed data usage control enables data owners to constrain how their data is used by remote entities. However, many data usage policies refer to events happening within several distributed systems, e.g. "at each point in time at most two clerks might have a local copy of this contract", or "a contract must be approved by at least two clerks before it is sent to the customer". While such policies can intuitively be enforced using a centralized infrastructure, major drawbacks are that such solutions constitute a single point of failure and that they are expected to cause heavy communication and performance overhead. Hence, we present the first fully decentralized infrastructure for the preventive enforcement of data usage policies. We provide a thorough evaluation of our infrastructure and show in which scenarios it is superior to a centralized approach.
@inproceedings{Kelbert2015Decentralized,
author={Kelbert, Florian and Pretschner, Alexander},
title={{A Fully Decentralized Data Usage Control Enforcement Infrastructure}},
booktitle={Applied Cryptography and Network Security},
series={Lecture Notes in Computer Science},
volume={9092},
editor={Malkin, Tal and Kolesnikov, Vladimir and Lewko, Allison Bishop and Polychronakis, Michalis},
pages={409--430},
year={2015},
publisher={Springer International Publishing}
}
Florian Kelbert, Alexander Pretschner: Decentralized Distributed Data Usage Control, In Proc. 13th International Conference on Cryptology and Network Security (CANS), Springer LNCS 8813, pages 353-369, October 2014, Heraklion, Crete, Greece.
Data usage control provides mechanisms for data owners to remain in control over how their data is used after it has been shared. Many data usage policies can only be enforced on a global scale, as they refer to data usage events happening within multiple distributed systems: ‘not more than three employees may ever read this document’, or ‘no copy of this document may be modified after it has been archived’. While such global policies can be enforced by a centralized enforcement infrastructure that observes all data usage events in all relevant systems, such a strategy involves heavy communication. We show how the overall coordination overhead can be reduced by deploying a decentralized enforcement infrastructure. Our contributions are: (i) a formal distributed data usage control system model; (ii) formal methods for identifying all systems relevant for evaluating a given policy; (iii) identification of situations in which no coordination between systems is necessary without compromising policy enforcement; (iv) proofs of correctness of (ii, iii).
@inproceedings{Kelbert2014Decentralized,
author={Kelbert, Florian and Pretschner, Alexander},
title={{Decentralized Distributed Data Usage Control}},
booktitle={Cryptology and Network Security},
series={Lecture Notes in Computer Science},
volume={8813},
editor={Gritzalis, Dimitris and Kiayias, Aggelos and Askoxylakis, Ioannis},
pages={353--369},
year={2014},
publisher={Springer International Publishing}
}
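Contribution (ii), identifying the systems relevant for evaluating a policy, rests on the observation that only systems holding a copy of the referenced data need to coordinate. A Python sketch with invented system and data names:

```python
# Illustrative sketch: a global policy only needs to be evaluated in
# coordination with the systems that actually hold a copy of the data it
# refers to; all other systems can be left out of the protocol.

def relevant_systems(holdings, policy_data):
    """holdings: {system: set of data items}; policy_data: items the policy mentions."""
    return {s for s, items in holdings.items() if items & policy_data}

holdings = {
    "laptop-alice": {"contract"},
    "server-hq":    {"contract", "report"},
    "laptop-bob":   {"report"},
}
# e.g. "no copy of 'contract' may be modified after it has been archived"
systems = relevant_systems(holdings, {"contract"})
```

Keeping coordination within this set, rather than broadcasting every data usage event, is what reduces the communication overhead relative to a centralized observer.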
Enrico Lovat, Florian Kelbert: Structure Matters - A new Approach for Data Flow Tracking, In 2014 IEEE Symposium on Security and Privacy Workshops (5th International Workshop on Data Usage Management, DUMA), May 2014, San Jose, CA, USA.
Usage control (UC) is concerned with how data may or may not be used after initial access has been granted. UC requirements are expressed in terms of data (e.g. a picture, a song), which exists within a system in the form of different technical representations (containers, e.g. files, memory locations, windows). A model combining UC enforcement with data flow tracking across containers has been proposed in the literature, but it exhibits a high rate of false positives. In this paper, we propose a refined approach for data flow tracking that mitigates this overapproximation problem by leveraging information about the inherent structure of the data being tracked. We propose a formal model and show some exemplary instantiations.
@inproceedings{Lovat2014Structure,
author={Lovat, Enrico and Kelbert, Florian},
booktitle={IEEE Security and Privacy Workshops (SPW)},
title={{Structure Matters - A New Approach for Data Flow Tracking}},
year={2014},
keywords={data flow tracking, data structure, usage control}
}
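The refinement can be illustrated with a small Python sketch (data items, parts, and container names are invented): taint is recorded per part of a structured data item rather than for the item as a whole, so copying a single field no longer marks the destination as holding everything.

```python
# Illustrative sketch of structure-aware data flow tracking: containers
# are tainted with (data item, part) pairs instead of whole data items.

taint = {}  # container -> set of (data item id, part name) pairs

def flow(src_parts, dst_container):
    """Record that the given parts flowed into a destination container."""
    taint.setdefault(dst_container, set()).update(src_parts)

def contains_whole(container, data_id, parts):
    """Does the container hold *all* parts of the data item?"""
    return {(data_id, p) for p in parts} <= taint.get(container, set())

# only the "name" field of a hypothetical customer record flows into a log
flow({("record1", "name")}, "/var/log/app.log")
```

A whole-item tracker would now report the log file as containing the complete record; the structured view limits the over-approximation to the single field that actually flowed.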
Florian Kelbert: Data Usage Control for the Cloud, In Proc. 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), Doctoral Symposium, pages 156-159, May 2013, Delft, The Netherlands. Best Poster Award.
Despite the increasing adoption of cloud-based services, concerns regarding the proper future usage and storage of data given to such services remain: Once sensitive data has been released to a cloud service, users often do not know which other organizations or services get access and may store, use or redistribute their data. The research field of usage control tackles such problems by enforcing requirements on the usage of data after it has been given away and is thus particularly important in the cloud ecosystem. So far, research has mainly focused on enforcing such requirements within single systems. This PhD thesis investigates the distributed aspects of usage control, with the goal of enforcing usage control requirements on data that flows between systems, services and applications that may be distributed logically, physically and organizationally. To this end, this thesis contributes by tackling four related subproblems: (1) tracking data flows across systems and propagating corresponding data usage policies, (2) taking distributed policy decisions, (3) investigating adaptivity of today's systems and services, and (4) providing appropriate guarantees. The conceptual results of this PhD thesis will be implemented and instantiated for cloud services, thus contributing to their trustworthiness and acceptance by providing security guarantees for the future usage of sensitive data. The results will be evaluated w.r.t. the provided security guarantees, practicability, usability, and performance.
@inproceedings{Kelbert2013Cloud,
author = {Kelbert, Florian},
title = {{Data Usage Control for the Cloud}},
booktitle = {Proceedings of the 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing},
series = {CCGrid '13},
year = {2013},
isbn = {978-1-4673-6465-2},
location = {Delft, The Netherlands},
pages = {156--159},
numpages = {4},
url = {http://dx.doi.org/10.1109/CCGrid.2013.35},
doi = {10.1109/CCGrid.2013.35},
publisher = {IEEE}
}
acatech (Ed.): Privatheit im Internet. Chancen wahrnehmen, Risiken einschätzen, Vertrauen gestalten / Internet Privacy. Taking opportunities, assessing risks, building trust, acatech POSITION PAPER, ISBN 978-3-642-37979-6, 36 pages, May 2013.
With its digital marketplaces, search engines, social networks, and many other services, the Internet can contribute to realizing fundamental European values: free self-determination, political participation, and the economic well-being of its users. However, users frequently pay for Internet services with their data rather than with money, which calls their privacy into question. In light of this tension, acatech shows how an Internet culture can be developed that makes it possible to seize the opportunities of the Internet while protecting people's privacy. This acatech POSITION PAPER contains concrete recommendations for how education, business, law, and technology can contribute to such a culture.
@book{acatech2013PrivatheitPosition,
editor = {{acatech}},
title = {{Privatheit im Internet. Chancen wahrnehmen, Risiken einschätzen, Vertrauen gestalten}},
year = {2013},
month = may,
isbn = {978-3-642-37979-6},
numpages = {36},
doi = {10.1007/978-3-642-37980-2},
series = {acatech POSITION PAPER},
publisher = {Springer Vieweg}
}
With its online marketplaces, search engines, social networks and multitude of other services, the Internet can contribute to upholding the basic European values of free self-determination, political participation and economic wellbeing for all citizens. However, the fact that users of online services often pay for them with their personal data instead of money can pose a threat to their privacy. acatech shows how the resulting tensions can be addressed by developing an Internet culture where it is possible to protect people’s privacy while still making the most of the opportunities offered by the Internet. This acatech POSITION PAPER outlines concrete recommendations for how education, business, regulation and technology can contribute to building this culture.
@book{acatech2013InternetPrivacyPosition,
editor = {{acatech}},
title = {{Internet Privacy. Taking opportunities, assessing risks, building trust}},
year = {2013},
month = may,
numpages = {34},
doi = {10.1007/978-3-642-37980-2},
series = {acatech POSITION PAPER},
publisher = {Springer Vieweg}
}
acatech (Ed.): Internet Privacy. Options for adequate realisation, acatech STUDY, ISBN 978-3-642-37912-3, 112 pages, May 2013.
A thorough multidisciplinary analysis of various perspectives on Internet privacy was published as the first volume of this study, revealing the results of the acatech project “Internet Privacy – A Culture of Privacy and Trust on the Internet.” This second publication from the project presents integrated, interdisciplinary options for improving privacy on the Internet, utilising a normative, value-oriented approach. It exemplifies the ways in which privacy promotes and preconditions fundamental societal values and how privacy violations endanger the flourishing of said values, and it illuminates the conditions that must be fulfilled in order to achieve a culture of privacy and trust on the Internet. The volume presents options for policy-makers, educators, businesses and technology experts on how to facilitate solutions for more privacy on the Internet and identifies further research requirements in this area.
@book{acatech2013InternetPrivacyStudy,
editor = {{acatech}},
title = {{Internet Privacy. Options for adequate realisation}},
year = {2013},
month = may,
isbn = {978-3-642-37912-3},
numpages = {112},
doi = {10.1007/978-3-642-37913-0},
publisher = {Springer Vieweg}
}
Florian Kelbert, Alexander Pretschner: Data Usage Control Enforcement in Distributed Systems, Proc. 3rd ACM Conference on Data and Application Security and Privacy (CODASPY), pages 71-82, February 2013, San Antonio, TX, USA.
Distributed usage control is concerned with how data may or may not be used in distributed system environments after initial access has been granted. If data flows through a distributed system, there exist multiple copies of the data on different client machines. Usage constraints then have to be enforced for all these clients. We extend a generic model for intra-system data flow tracking—that has been designed and used to track the existence of copies of data on single clients—to the cross-system case. When transferring, i.e., copying, data from one machine to another, our model makes it possible to (1) transfer usage control policies along with the data to the end of local enforcement at the receiving end, and (2) to be aware of the existence of copies of the data in the distributed system. As one example, we concretize “transfer of data” to the Transmission Control Protocol (TCP). Based on this concretized model, we develop a distributed usage control enforcement infrastructure that generically and application-independently extends the scope of usage control enforcement to any system receiving usage-controlled data. We instantiate and implement our work for OpenBSD and evaluate its security and performance.
@inproceedings{Kelbert2013Codaspy,
author = {Kelbert, Florian and Pretschner, Alexander},
title = {{Data Usage Control Enforcement in Distributed Systems}},
booktitle = {Proceedings of the Third ACM Conference on Data and Application Security and Privacy},
series = {CODASPY '13},
year = {2013},
isbn = {978-1-4503-1890-7},
location = {San Antonio, Texas, USA},
pages = {71--82},
numpages = {12},
url = {http://doi.acm.org/10.1145/2435349.2435358},
doi = {10.1145/2435349.2435358},
acmid = {2435358},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {data flow tracking, distributed usage control, policy enforcement, security and privacy, sticky policies}
}
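The cross-system step, shipping the policy in-band with the data, can be sketched as a simple framing scheme (Python; the wire format is our invention, while the paper instruments TCP at the operating system level):

```python
# Illustrative sketch: when usage-controlled data leaves a machine, its
# policy is serialized and prepended to the payload, so the receiving
# side can install local enforcement before handing the data on.

import json

def wrap(payload: bytes, policy: dict) -> bytes:
    """Frame a payload together with its usage policy."""
    header = json.dumps(policy).encode()
    return len(header).to_bytes(4, "big") + header + payload

def unwrap(message: bytes):
    """Split a framed message back into payload and policy."""
    hlen = int.from_bytes(message[:4], "big")
    policy = json.loads(message[4:4 + hlen])
    return message[4 + hlen:], policy

msg = wrap(b"contract body", {"usage": "no-redistribution"})
data, policy = unwrap(msg)
```

The receiving enforcement point would register the unwrapped policy for the local copy before any application sees the payload, which also makes the new copy known to the distributed tracking model.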
Alexander Fromm, Florian Kelbert, Alexander Pretschner: Data Protection in a Cloud-Enabled Smart Grid, Proc. First International Workshop on Smart Grid Security (SmartGridSec), Springer LNCS 7823, pages 96-107, December 2012, Berlin, Germany.
Today’s electricity grid is evolving into the smart grid, which ought to be reliable, flexible, efficient, and sustainable. To fulfill these requirements, the smart grid draws on a number of core technologies, such as the Advanced Metering Infrastructure (AMI). These technologies facilitate the easy and fast accumulation of different data, e.g. fine-grained meter readings. Various security and privacy concerns w.r.t. the gathered data arise, since research has shown that it is possible to deduce and extract user behaviour from smart meter readings. Hence, these meter readings are very sensitive and require appropriate protection.
Unlike other data protection approaches that are primarily based on data obfuscation and data encryption, we introduce a usage control based data protection mechanism for the smart grid. We show how the concept of distributed data usage control can be integrated with smart grid services and concretize this approach for an energy marketplace that runs on a cloud platform for performance, scalability, and economic reasons.
@inproceedings{Fromm2012SmartGrid,
author={Fromm, Alexander and Kelbert, Florian and Pretschner, Alexander},
title={{Data Protection in a Cloud-Enabled Smart Grid}},
booktitle={Smart Grid Security},
series={Lecture Notes in Computer Science},
volume={7823},
editor={Cuellar, Jorge},
pages={96--107},
year={2012},
publisher={Springer Berlin Heidelberg}
}
Florian Kelbert, Fatemeh Shirazi, Hervais Simo, Tobias Wüchner, Johannes Buchmann, Alexander Pretschner, Michael Waidner: State of Online Privacy: A Technical Perspective, In: Johannes Buchmann (Ed.): Internet Privacy. Eine multidisziplinäre Bestandsaufnahme/ A multidisciplinary analysis (acatech STUDIE), ISBN 978-3-642-31942-6, pages 189-279, September 2012.
Recent years have seen an unprecedented growth of Internet-based applications and offerings that have a huge impact on individuals’ daily lives and the practices of organisations (businesses and governments). These applications are bound to bring large-scale data collection, long-term storage, and systematic sharing of data across various data controllers, i.e., individuals, partner organizations, and scientists. This creates new privacy issues. For instance, emerging Internet-based applications and the underlying technologies provide new ways to track and profile individual users across multiple Internet domains, often without their knowledge or consent. In this section, we present the current state of privacy on the Internet. The section proposes a review and analysis of current threats to individual privacy on the Internet as well as existing countermeasures. Our analysis considers five emerging Internet-based applications, namely personalized web and e-commerce services, online social networks, cloud computing applications, cyber-physical systems, and big data. It outlines privacy-threatening techniques, with a focus on those applications. We conclude with a discussion of technologies that could help address different types of privacy threats and thus support privacy on the Web.
booktitle={Internet Privacy. Eine multidisziplin{\"a}re Bestandsaufnahme/ A multidisciplinary analysis},
series={acatech Studie},
editor={Buchmann, Johannes},
title={{State of Online Privacy: A Technical Perspective}},
pages={189--279},
year={2012},
isbn={978-3-642-31942-6},
publisher={Springer Berlin Heidelberg},
author={Kelbert, Florian and Shirazi, Fatemeh and Simo, Hervais and W\"{u}chner, Tobias and Buchmann, Johannes and Pretschner, Alexander and Waidner, Michael},
Florian Kelbert, Alexander Pretschner: Towards a Policy Enforcement Infrastructure for Distributed Usage Control, Proc. 17th ACM Symposium on Access Control Models and Technologies (SACMAT), pages 119-122, June 2012, Newark, NJ, USA.
Distributed usage control is concerned with how data may or may not be used after initial access to it has been granted and is therefore particularly important in distributed system environments. We present an application- and application-protocol-independent infrastructure that allows for the enforcement of usage control policies in a distributed environment. We instantiate the infrastructure for transferring files using FTP and for a scenario where smart meters are connected to a Facebook application.
author = {Kelbert, Florian and Pretschner, Alexander},
title = {{Towards a Policy Enforcement Infrastructure for Distributed Usage Control}},
booktitle = {Proceedings of the 17th ACM Symposium on Access Control Models and Technologies},
series = {SACMAT '12},
year = {2012},
month = jun,
isbn = {978-1-4503-1295-0},
location = {Newark, New Jersey, USA},
pages = {119--122},
numpages = {4},
url = {http://doi.acm.org/10.1145/2295136.2295159},
doi = {10.1145/2295136.2295159},
acmid = {2295159},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {distributed usage control, policy enforcement, security and privacy, sticky policies}
Prachi Kumari, Florian Kelbert, Alexander Pretschner: Data Protection in Heterogeneous Distributed Systems: A Smart Meter Example, Proc. Dependable Software for Critical Infrastructures (DSCI), October 2011, Berlin, Germany.
Usage control is concerned with how data is used after access has been granted. Enforcement mechanisms have been implemented for distributed systems like web based social networks (WBSN) at various levels of abstraction. We extend data usage control to heterogeneous distributed systems by implementing a policy enforcement mechanism for a smart meter connected to a WBSN. The idea is to provide users an opportunity to share their energy usage and other related data within their social group while maintaining control over further usage of that data. The implementation borrows from an existing usage control framework for a common web browser.
author = {Kumari, P. and Kelbert, F. and Pretschner, A.},
title = {{Data Protection in Heterogeneous Distributed Systems: A Smart Meter Example}},
booktitle = {Proc. Workshop on Dependable Software for Critical Infrastructures. GI Lecture Notes in Informatics},
month = oct,
year = {2011}
Florian Kelbert: Authorization Constraints in Workflow Management Systemen, Diploma Thesis, Ulm University, Germany, 229 pages, June 2010.
Workflow management systems (WfMS) are increasingly used to model and execute comprehensive business processes. Since processes are executed by many different users, complex security requirements, also called authorization constraints, become necessary.
In today’s WfMS, users are granted permissions that authorize the execution of individual process activities. Authorization constraints, in contrast, enable the use of more complex permissions. Well-known examples are Separation of Duty (SoD) and Binding of Duty (BoD). While SoD requires two or more activities to be executed by different users, BoD requires them to be executed by the same user.
This thesis examines the use of authorization constraints in WfMS, focusing in particular on their enforcement and validation. Enforcement aims to guarantee that all authorization constraints are satisfied both at modeling time and at runtime of the process. Validation aims to find contradictions between authorization constraints; it also addresses the question of whether all activities of a process can be executed while all authorization constraints are enforced at the same time.
First, the authorization constraints relevant to WfMS are presented and categorized. To enable the enforcement and validation of the different authorization constraints in a uniform way, a model called MOnK is introduced. MOnK allows the different authorization constraints to be modeled uniformly; enforcement and validation are then performed on the basis of so-called MOnK constraints. Note that different kinds of authorization constraints exist, some of which can be enforced and validated at modeling time, while others can only be handled at process runtime. This thesis therefore presents approaches for enforcing and validating MOnK constraints both at modeling time and at runtime of a process.
As a proof of concept, the modeling of authorization constraints via MOnK constraints, as well as the modeling-time parts of enforcement and validation, were implemented.
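To illustrate the SoD and BoD constraints described in the abstract, here is a minimal, hypothetical Python sketch (not taken from the thesis; all names are made up) that checks both constraint types against a runtime assignment of activities to users:

```python
# Illustrative sketch only: SoD/BoD checks over an activity-to-user assignment.

def violates_sod(assignments, activities):
    """SoD: the listed activities must be executed by *different* users."""
    users = [assignments[a] for a in activities if a in assignments]
    return len(users) != len(set(users))  # duplicate user => violation

def violates_bod(assignments, activities):
    """BoD: the listed activities must be executed by the *same* user."""
    users = {assignments[a] for a in activities if a in assignments}
    return len(users) > 1  # more than one distinct user => violation

# Example: the same user prepares and approves a payment.
assignments = {"prepare_payment": "alice", "approve_payment": "alice"}
print(violates_sod(assignments, ["prepare_payment", "approve_payment"]))  # True
print(violates_bod(assignments, ["prepare_payment", "approve_payment"]))  # False
```

Real WfMS additionally have to validate such constraints against each other at modeling time, e.g., to detect that an SoD and a BoD constraint over the same pair of activities can never both be satisfied.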
author = {Florian Kelbert},
title = {{Authorization Constraints in Workflow Management Systemen}},
school = {Ulm University, Germany},
year = 2010,
month = jun


PhD Mentor

  • Mohsen Ahmadvand: Software Integrity Protection, Technical University of Munich, Germany, In progress since February 2017.

Supervised Theses

  • Automatic Generation of Secure and Usable Mnemonic Passphrases, Master's Thesis, Technical University of Munich, Germany, 136 pages, May 2016.
  • Securing Data Usage Control Infrastructures, Master's Thesis, Technical University of Munich, Germany, 161 pages, August 2015.
  • Monitoring Compliance of Third-Party Applications in Online Social Networks, Bachelor's Thesis, Technical University of Munich, Germany, 49 pages, October 2014.
  • A Comprehensive Usage Control System for Distributed Usage Control, Master's Thesis, University of Kaiserslautern, Germany, 129 pages, September 2011.


  • Foundations of Program and System Development (WS15, WS14)
  • Programming (WS11, SS11, WS10)


  • Security Engineering (WS13, WS12, SS11)


  • Human-Centered Security (WS15)
  • Evaluating Security Approaches (SS14)
  • Secure IT Systems (WS11)


  • Introduction to Software Engineering (SS15, SS14)
  • Foundations of Programming (WS13)

Ongoing projects

Past projects

Awards and Service


  • Best Poster Award, 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), May 2013, Delft, The Netherlands.

Invited Talks

  • Verteilte Daten-Nutzungskontrolle: Potenziale und Herausforderungen (Distributed Data Usage Control: Potentials and Challenges). In CAST workshop "Technischer Datenschutz - Trends und Herausforderungen" (Technical Data Protection: Trends and Challenges), CAST e.V., April 2013, Darmstadt, Germany.