Hi, I'm Florian, ...
... a research engineer at Imperial College London.
Working in the Large-Scale Data & Systems Group, I am interested in diverse aspects of:
- information security and privacy
- scalable distributed systems
- distributed systems security
- usability aspects of security technology
For more details, please see my projects and publications.
X.509 Certificate
Office 347, Huxley Building
Department of Computing
Imperial College London
South Kensington Campus
London SW7 2AZ, UK
- since 05/2016 Research Associate at Imperial College London, UK
- 06/2012 - 04/2016 Researcher at Technical University of Munich, Germany
- 09/2010 - 05/2012 Researcher at Karlsruhe Institute of Technology, Germany
- 10/2004 - 07/2010 Diploma studies of Computer Science at Ulm University, Germany
Publications
Also see Google Scholar and DBLP.
Data usage control enables data owners to enforce policies over how their data may be used after it has been released and accessed. We address distributed aspects of this problem, which arise if the protected data resides within multiple systems. We contribute by formalizing, implementing, and evaluating a fully decentralized system that (i) generically and transparently tracks protected data across systems, (ii) propagates data usage policies along, and (iii) efficiently and preventively enforces policies in a decentralized manner. The evaluation shows that (i) data flow tracking and policy propagation achieve a throughput of 21%-54% of native execution, and (ii) decentralized policy enforcement outperforms a centralized approach in many situations.
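The core idea of tracking data and propagating its policies across systems can be illustrated with a toy sketch. This is an invented, minimal model (container names, policy strings, and the `transfer` helper are all hypothetical); the paper's actual formal model covers system state, events, and full usage policies.

```python
# Toy sketch of cross-system data flow tracking with policy propagation:
# a mapping from containers (files, sockets, ...) to the set of policies
# attached to the data they currently hold. All names are invented.
state = {"alice:/doc.pdf": {"delete-after-30-days"}}

def transfer(src, dst):
    """Model a data flow from container src to dst: the destination now
    holds (a copy of) the data, so the data's policies propagate along."""
    state.setdefault(dst, set()).update(state.get(src, set()))

transfer("alice:/doc.pdf", "alice:socket42")  # data leaves system A ...
transfer("alice:socket42", "bob:/copy.pdf")   # ... and arrives at system B
```

After both transfers, the copy on the remote system carries the same usage policy as the original, which is what allows decentralized enforcement to act on it locally.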
@article{Kelbert2018Data,
author = {Kelbert, Florian and Pretschner, Alexander},
title = {{Data Usage Control for Distributed Systems}},
journal = {ACM Trans. Priv. Secur.},
issue_date = {April 2018},
volume = {21},
number = {3},
month = apr,
year = {2018},
issn = {2471-2566},
pages = {12:1--12:32},
articleno = {12},
numpages = {32},
url = {http://doi.acm.org/10.1145/3183342},
doi = {10.1145/3183342},
acmid = {3183342},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {Data usage control, data protection, dataflow tracking, distributed systems, policy enforcement, privacy, security},
}
Users of online services such as messaging, code hosting and collaborative document editing expect the services to uphold the integrity of their data. Despite providers' best efforts, data corruption still occurs, but at present service integrity violations are excluded from SLAs. For providers to include such violations as part of SLAs, the competing requirements of clients and providers must be satisfied. Clients need the ability to independently identify and prove service integrity violations to claim compensation. At the same time, providers must be able to refute spurious claims.
We describe LibSEAL, a SEcure Audit Library for internet services that creates a non-repudiable audit log of service operations and checks invariants to discover violations of service integrity. LibSEAL is a drop-in replacement for TLS libraries used by services, and thus observes and logs all service requests and responses. It runs inside a trusted execution environment, such as Intel SGX, to protect the integrity of the audit log. Logs are stored using an embedded relational database, permitting service invariant violations to be discovered using simple SQL queries. We evaluate LibSEAL with three popular online services (Git, ownCloud and Dropbox) and demonstrate that it is effective in discovering integrity violations, while reducing throughput by at most 14%.
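The abstract mentions that invariant violations can be discovered with simple SQL queries over the audit log. The following sketch illustrates that idea with an invented log schema and invariant using SQLite; LibSEAL's actual schema and queries are not shown here.

```python
import sqlite3

# Hypothetical audit-log schema, loosely inspired by the LibSEAL idea
# above: each logged service operation becomes one row.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE log (
    seq INTEGER PRIMARY KEY,   -- monotonic log sequence number
    op  TEXT NOT NULL,         -- service operation, e.g. 'put' or 'get'
    key TEXT NOT NULL,         -- object the operation refers to
    val TEXT                   -- payload observed on the wire
)""")

# A client stores a value, overwrites it, and later reads it back.
conn.executemany("INSERT INTO log (op, key, val) VALUES (?, ?, ?)", [
    ("put", "doc1", "v1"),
    ("put", "doc1", "v2"),
    ("get", "doc1", "v1"),     # stale read: violates the invariant below
])

# Invariant: every 'get' must return the value of the latest preceding 'put'.
violations = conn.execute("""
    SELECT g.seq, g.key, g.val AS returned, p.val AS expected
    FROM log g
    JOIN log p ON p.seq = (SELECT MAX(seq) FROM log
                           WHERE op = 'put' AND key = g.key AND seq < g.seq)
    WHERE g.op = 'get' AND g.val <> p.val
""").fetchall()
print(violations)  # reports the stale read
```

Because the log is non-repudiable, a reported row like this would serve as evidence of the integrity violation rather than just a diagnostic.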
@inproceedings{Aublin2018Libseal,
author = {Aublin, Pierre-Louis and Kelbert, Florian and O'Keeffe, Dan and Muthukumaran, Divya and Priebe, Christian and Lind, Joshua and Krahn, Robert and Fetzer, Christof and Eyers, David and Pietzuch, Peter},
title = {{LibSEAL: Revealing Service Integrity Violations Using Trusted Execution}},
booktitle = {Proceedings of the Thirteenth EuroSys Conference},
series = {EuroSys '18},
year = {2018},
isbn = {978-1-4503-5584-1},
location = {Porto, Portugal},
pages = {24:1--24:15},
articleno = {24},
numpages = {15},
url = {http://doi.acm.org/10.1145/3190508.3190547},
doi = {10.1145/3190508.3190547},
acmid = {3190547},
publisher = {ACM},
address = {New York, NY, USA},
}
Tampering with software by Man-At-The-End (MATE) attackers is an attack that can lead to security circumvention, privacy violation, reputation damage and revenue loss. In this model, adversaries are end users who have full control over software as well as its execution environment. This full control enables them to tamper with programs to their benefit and to the detriment of software vendors or other end users. Software integrity protection research seeks means to mitigate those attacks. Since the seminal work of Aucsmith, a great deal of research effort has been devoted to fighting MATE attacks, and many protection schemes have been designed by both academia and industry. Advances in trusted hardware, such as TPM and Intel SGX, have also enabled researchers to utilize such technologies for additional protection. Despite the introduction of various protection schemes, there is no comprehensive comparison study that points out the advantages and disadvantages of different schemes. The constraints of different schemes and their applicability in various industrial settings have not been studied. More importantly, except for some partial classifications, to the best of our knowledge, there is no taxonomy of integrity protection techniques. These limitations have left practitioners in doubt about the effectiveness and applicability of such schemes to their infrastructure. In this work, we propose a taxonomy that captures protection processes by encompassing system, defense and attack perspectives. We then carry out a survey and map the reviewed papers onto our taxonomy. Finally, we correlate different dimensions of the taxonomy and discuss observations along with research gaps in the field.
@incollection{Ahmadvand2018Taxonomoy,
title = {{A Taxonomy of Software Integrity Protection Techniques}},
series = {Advances in Computers},
publisher = {Elsevier},
year = 2018,
issn = {0065-2458},
doi = {10.1016/bs.adcom.2017.12.007},
url = {http://www.sciencedirect.com/science/article/pii/S0065245817300591},
author = {Mohsen Ahmadvand and Alexander Pretschner and Florian Kelbert},
keywords = {Tamper-proofing, Integrity protection, Taxonomy, Software protection, Software monetization}
}
[Context/Background]: With the increasing use of cyber-physical systems in complex socio-technical setups, mechanisms that hold specific entities accountable for safety and security incidents are needed. Although there exist models that try to capture and formalize accountability concepts, many of these lack practical implementations. We hence know little about how accountability mechanisms work in practice and how specific entities could be held responsible for incidents. [Goal]: As a step towards the practical implementation of providing accountability, this systematic mapping study investigates existing implementations of accountability concepts with the goal of (1) identifying a common definition of accountability and (2) identifying the general trend of practical research. [Method]: To survey the literature for existing implementations, we conducted a systematic mapping study. [Results]: We contribute a systematic overview of current accountability realizations and requirements for future accountability approaches. [Conclusions]: We find that existing practical accountability research lacks a common definition of accountability in the first place. The research field seems rather scattered, with no generally accepted architecture and/or set of requirements. While most accountability implementations focus on privacy and security, no safety-related approaches seem to exist. Furthermore, we did not find excessive references to relevant and related concepts such as reasoning, log analysis and causality.
@Inbook{Kacianka2017Accountability,
author={Kacianka, Severin and Beckers, Kristian and Kelbert, Florian and Kumari, Prachi},
editor={Felderer, Michael and M{\'e}ndez Fern{\'a}ndez, Daniel and Turhan, Burak and Kalinowski, Marcos and Sarro, Federica and Winkler, Dietmar},
title={{How Accountability is Implemented and Understood in Research Tools}},
bookTitle={Product-Focused Software Process Improvement: 18th International Conference, PROFES 2017, Innsbruck, Austria, November 29--December 1, 2017, Proceedings},
year={2017},
publisher={Springer International Publishing},
address={Cham},
pages={199--218},
isbn={978-3-319-69926-4},
doi={10.1007/978-3-319-69926-4_15},
url={https://doi.org/10.1007/978-3-319-69926-4_15}
}
Blockchain protocols such as Bitcoin are gaining traction for exchanging payments in a secure and decentralized manner. Their need to achieve consensus across a large number of participants, however, fundamentally limits their performance. We describe Teechain, a new off-chain payment protocol that utilizes trusted execution environments (TEEs) to perform secure, efficient and scalable fund transfers on top of a blockchain, with asynchronous blockchain access. Teechain introduces secure payment chains to route payments across multiple payment channels. Teechain mitigates failures of TEEs with two strategies: (i) backups to persistent storage and (ii) a novel variant of chain-replication. We evaluate an implementation of Teechain using Intel SGX as the TEE and the operational Bitcoin blockchain. Our prototype achieves orders of magnitude improvement in most metrics compared to existing implementations of payment channels: with replicated Teechain nodes in a trans-atlantic deployment, we measure a throughput of over 33,000 transactions per second with 0.1 second latency.
@article{Lind2017Teechain,
author = {Joshua Lind and Ittay Eyal and Florian Kelbert and Oded Naor and Peter Pietzuch and Emin G{\"{u}}n Sirer},
title = {{Teechain: Scalable Blockchain Payments using Trusted Execution Environments}},
journal = {CoRR},
volume = {abs/1707.05454},
year = {2017},
url = {http://arxiv.org/abs/1707.05454}
}
Trusted execution support in modern CPUs, as offered by Intel SGX enclaves, can protect applications in untrusted environments. While prior work has shown that legacy applications can run in their entirety inside enclaves, this results in a large trusted computing base (TCB). Instead, we explore an approach in which we partition an application and use an enclave to protect only security-sensitive data and functions, thus obtaining a smaller TCB.
We describe Glamdring, the first source-level partitioning framework that secures applications written in C using Intel SGX. A developer first annotates security-sensitive application data. Glamdring then automatically partitions the application into untrusted and enclave parts: (i) to preserve data confidentiality, Glamdring uses dataflow analysis to identify functions that may be exposed to sensitive data; (ii) for data integrity, it uses backward slicing to identify functions that may affect sensitive data. Glamdring then places security-sensitive functions inside the enclave, and adds runtime checks and cryptographic operations at the enclave boundary to protect it from attack. Our evaluation of Glamdring with the Memcached store, the LibreSSL library, and the Digital Bitbox bitcoin wallet shows that it achieves small TCB sizes and has acceptable performance overheads.
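The two analyses named above can be pictured on a toy function-level graph: forward reachability from functions touching sensitive data approximates confidentiality exposure, and backward reachability approximates integrity impact. This is an invented illustration (the graph, function names, and helper are all hypothetical); Glamdring's real analyses operate on C source code with dataflow analysis and program slicing.

```python
from collections import deque

def reachable(graph, start):
    """All nodes reachable from the nodes in `start` by following edges."""
    seen, queue = set(start), deque(start)
    while queue:
        for nxt in graph.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Edges follow the direction of data flow between (invented) functions.
flows = {
    "parse_request": ["lookup_key"],
    "lookup_key": ["decrypt_value"],      # touches sensitive data
    "decrypt_value": ["format_reply"],
    "format_reply": ["send_reply"],
    "log_stats": [],
}
reverse = {f: [g for g in flows if f in flows[g]] for f in flows}

sensitive = {"decrypt_value"}
exposed   = reachable(flows, sensitive)     # may see sensitive data
affecting = reachable(reverse, sensitive)   # may influence sensitive data
enclave   = exposed | affecting             # functions placed in the enclave
```

In this toy example only `log_stats` stays outside the enclave, mirroring how the partitioning keeps functions that never touch or influence sensitive data out of the TCB.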
@inproceedings {Lind2017Glamdring,
author = {Joshua Lind and Christian Priebe and Divya Muthukumaran and Dan O{\textquoteright}Keeffe and Pierre-Louis Aublin and Florian Kelbert and Tobias Reiher and David Goltzsche and David Eyers and R{\"u}diger Kapitza and Christof Fetzer and Peter Pietzuch},
title = {{Glamdring: Automatic Application Partitioning for Intel SGX}},
booktitle = {2017 USENIX Annual Technical Conference (USENIX ATC 17)},
year = {2017},
isbn = {978-1-931971-38-6},
address = {Santa Clara, CA},
pages = {285--298},
url = {https://www.usenix.org/conference/atc17/technical-sessions/presentation/lind},
publisher = {USENIX Association},
}
Internet users have become reliant on a swathe of online services for everyday tasks and expect them to uphold service integrity. However, data loss or corruption does happen despite service providers’ best efforts. In such cases, users often have little recourse. Our goal is to strengthen the position of users by helping them to discover and prove integrity violations by Internet services.
LibSEAL is a SEcure Audit Library for Internet services that (i) transparently creates a non-repudiable audit log of service operations and (ii) checks invariants over that log to discover service integrity violations. LibSEAL protects the confidentiality of code and data by executing inside an Intel SGX trusted execution environment (called enclave). LibSEAL securely and effectively discovers service integrity violations, while reducing throughput by at most 32%.
@inproceedings{Aublin2017PosterLibseal,
author = {Pierre-Louis Aublin and Florian Kelbert and Dan O'Keeffe and Divya Muthukumaran and Christian Priebe and Joshua Lind and Robert Krahn and Christof Fetzer and David Eyers and Peter Pietzuch},
title = {{Poster: LibSEAL: Detecting Service Integrity Violations Using Trusted Execution}},
booktitle = {Proceedings of the Twelfth European Conference on Computer Systems},
series = {EuroSys '17},
year = 2017,
month = apr,
location = {Belgrade, Serbia},
publisher = {ACM},
address = {New York, NY, USA},
}
We introduce TaLoS, a drop-in replacement for existing transport layer security (TLS) libraries that protects itself from a malicious environment by running inside an Intel SGX trusted execution environment. By minimising the amount of enclave transitions and reducing the overhead of the remaining enclave transitions, TaLoS imposes an overhead of no more than 31% in our evaluation with the Apache web server and the Squid proxy.
@TechReport{Aublin2017Talos,
title = {{TaLoS: Secure and Transparent TLS Termination inside SGX Enclaves}},
author = {Pierre-Louis Aublin and Florian Kelbert and Dan O'Keeffe and Divya Muthukumaran and Christian Priebe and Joshua Lind and Robert Krahn and Christof Fetzer and David Eyers and Peter Pietzuch},
year = 2017,
month = mar,
institution = {Imperial College London},
number = {2017/5},
note = {Technical Report, \url{https://www.doc.ic.ac.uk/research/technicalreports/2017/#5}}
}
We present the SecureCloud EU Horizon 2020 project, whose goal is to enable new big data applications that use sensitive data in the cloud without compromising data security and privacy. For this, SecureCloud designs and develops a layered architecture that allows for (i) the secure creation and deployment of secure micro-services; (ii) the secure integration of individual micro-services to full-fledged big data applications; and (iii) the secure execution of these applications within untrusted cloud environments. To provide security guarantees, SecureCloud leverages novel security mechanisms present in recent commodity CPUs, in particular, Intel's Software Guard Extensions (SGX). SecureCloud applies this architecture to big data applications in the context of smart grids. We describe the SecureCloud approach, initial results, and considered use cases.
@INPROCEEDINGS{Kelbert2017SecureCloud,
author={Florian Kelbert and Franz Gregor and Rafael Pires and Stefan Köpsell and Marcelo Pasin and Aurélien Havet and Valerio Schiavoni and Pascal Felber and Christof Fetzer and Peter Pietzuch},
booktitle={Design, Automation Test in Europe Conference Exhibition (DATE), 2017},
title={{SecureCloud: Secure Big Data Processing in Untrusted Clouds}},
year=2017,
pages={282--285},
keywords={Cloud computing;Containers;Encryption;Hardware;Program processors},
doi={10.23919/DATE.2017.7926999},
month=mar
}
author={Florian Kelbert and Franz Gregor and Rafael Pires and Stefan Köpsell and Marcelo Pasin and Aurélien Havet and Valerio Schiavoni and Pascal Felber and Christof Fetzer and Peter Pietzuch},
booktitle={Design, Automation Test in Europe Conference Exhibition (DATE), 2017},
title={{SecureCloud: Secure Big Data Processing in Untrusted Clouds}},
year=2017,
pages={282--285},
keywords={Cloud computing;Containers;Encryption;Hardware;Program processors},
doi={10.23919/DATE.2017.7926999},
month=mar
}
With the widespread adoption of Online Social Networks (OSNs), users increasingly also use corresponding third-party applications (TPAs), such as social games and applications for collaboration. To improve their social experience, TPAs access users’ personal data via an API provided by the OSN. Applications are then expected to comply with certain security and privacy policies when handling the users’ data. However, in practice, they might store, use, and distribute that data in all kinds of unapproved ways. We present an approach that transparently enforces security and privacy policies on TPAs that integrate with OSNs. To this end, we integrate concepts and implementations from the research areas of data usage control and information flow control. We instantiate these results in the context of TPAs in OSNs in order to enforce compliance with security and privacy policies that are provided by the OSN operator. We perform a preliminary evaluation of our approach on the basis of a TPA that integrates with the Facebook API.
@INPROCEEDINGS{Kelbert2016Compliance,
author={Florian Kelbert and Alexander Fromm},
booktitle={2016 IEEE Security and Privacy Workshops (SPW)},
title={{Compliance Monitoring of Third-Party Applications in Online Social Networks}},
year=2016,
pages={9--16},
keywords={application program interfaces;data privacy;social networking (online);API;OSN;compliance monitoring;online social network;privacy policy;security policy;third-party application;Data privacy;Databases;Engines;Facebook;Monitoring;Privacy;Security;compliance;data usage control;online social networks;privacy policies;third-party applications},
doi={10.1109/SPW.2016.13},
month=may
}
Accountability aims to provide explanations for why unwanted situations occurred, thus providing means to assign responsibility and liability. As such, accountability has slightly different meanings across the sciences. In computer science, our focus is on providing explanations for technical systems, in particular if they interact with their physical environment using sensors and actuators and may do serious harm. Accountability is relevant when considering safety, security and privacy properties and we realize that all these incarnations are facets of the same core idea. Hence, in this paper we motivate and propose a model for accountability infrastructures that is expressive enough to capture all of these domains. At its core, this model leverages formal causality models from the literature in order to provide a solid reasoning framework. We show how this model can be instantiated for several real-world use cases.
@inproceedings{Kacianka2016Towards,
author = {Severin Kacianka and Florian Kelbert and Alexander Pretschner},
title = {{Towards a Unified Model of Accountability Infrastructures}},
booktitle = {1st Workshop on Causal Reasoning for Embedded and safety-critical Systems Technologies (CREST)},
doi = {10.4204/EPTCS.224.5},
year = 2016,
url = {http://arxiv.org/abs/1608.07882},
}
This thesis is concerned with controlling the usage of sensitive data once it has been disseminated to multiple systems. To this end, a formal model for distributed data usage control is proposed that allows data flows to be tracked across systems and distributed data usage policies to be enforced in a decentralized manner. The correctness of the provided formal methods is proven. Further, the proposed ideas are implemented and evaluated in terms of security as well as communication and performance overheads.
@phdthesis {Kelbert:2016:Thesis,
author = {Kelbert, Florian Manuel},
title = {{Data Usage Control for Distributed Systems}},
type = {Dissertation},
school = {Technische Universität M{\"{u}}nchen},
address = {M{\"{u}}nchen},
month = mar,
year = 2016
}
Distributed data usage control enables data owners to constrain how their data is used by remote entities. However, many data usage policies refer to events happening within several distributed systems, e.g. "at each point in time at most two clerks might have a local copy of this contract", or "a contract must be approved by at least two clerks before it is sent to the customer". While such policies can intuitively be enforced using a centralized infrastructure, major drawbacks are that such solutions constitute a single point of failure and that they are expected to cause heavy communication and performance overhead. Hence, we present the first fully decentralized infrastructure for the preventive enforcement of data usage policies. We provide a thorough evaluation of our infrastructure and show in which scenarios it is superior to a centralized approach.
@incollection{Kelbert:2015:ACNS,
year={2015},
isbn={978-3-319-28165-0},
booktitle={Applied Cryptography and Network Security},
volume={9092},
series={Lecture Notes in Computer Science},
editor={Malkin, Tal and Kolesnikov, Vladimir and Lewko, Allison Bishop and Polychronakis, Michalis},
doi={10.1007/978-3-319-28166-7_20},
title={{A Fully Decentralized Data Usage Control Enforcement Infrastructure}},
url={http://dx.doi.org/10.1007/978-3-319-28166-7_20},
publisher={Springer International Publishing},
author={Kelbert, Florian and Pretschner, Alexander},
pages={409--430},
language={English}
}
Data usage control provides mechanisms for data owners to remain in control over how their data is used after it has been shared. Many data usage policies can only be enforced on a global scale, as they refer to data usage events happening within multiple distributed systems: ‘not more than three employees may ever read this document’, or ‘no copy of this document may be modified after it has been archived’. While such global policies can be enforced by a centralized enforcement infrastructure that observes all data usage events in all relevant systems, such a strategy involves heavy communication. We show how the overall coordination overhead can be reduced by deploying a decentralized enforcement infrastructure. Our contributions are: (i) a formal distributed data usage control system model; (ii) formal methods for identifying all systems relevant for evaluating a given policy; (iii) identification of situations in which no coordination between systems is necessary without compromising policy enforcement; (iv) proofs of correctness of (ii, iii).
@incollection{Kelbert:2014:CANS,
year={2014},
isbn={978-3-319-12279-3},
booktitle={Cryptology and Network Security},
volume={8813},
series={Lecture Notes in Computer Science},
editor={Gritzalis, Dimitris and Kiayias, Aggelos and Askoxylakis, Ioannis},
doi={10.1007/978-3-319-12280-9_23},
title={{Decentralized Distributed Data Usage Control}},
url={http://dx.doi.org/10.1007/978-3-319-12280-9_23},
publisher={Springer International Publishing},
author={Kelbert, Florian and Pretschner, Alexander},
pages={353--369},
language={English}
}
Usage control (UC) is concerned with how data may or may not be used after initial access has been granted. UC requirements are expressed in terms of data (e.g. a picture, a song), which exists within a system in the form of different technical representations (containers, e.g. files, memory locations, windows). A model combining UC enforcement with data flow tracking across containers has been proposed in the literature, but it exhibits a high false-positive detection rate. In this paper, we propose a refined approach for data flow tracking that mitigates this overapproximation problem by leveraging information about the inherent structure of the data being tracked. We propose a formal model and show some exemplary instantiations.
@inproceedings{Lovat2014Structure,
author={Lovat, Enrico and Kelbert, Florian},
booktitle={IEEE Security and Privacy Workshops (SPW)},
title={{Structure Matters - A New Approach for Data Flow Tracking}},
year=2014,
month=may,
pages={39--43},
keywords={data flow tracking, data structure, usage control},
doi={10.1109/SPW.2014.15}
}
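The idea of leveraging data structure to reduce overapproximation in flow tracking can be illustrated with a toy example. This is a hypothetical sketch, not the paper's formal model; the container and field names are made up.

```python
# Toy illustration of structure-aware data flow tracking: recording which
# parts of a structured container hold which data items means that copying
# one field need not taint the whole destination container.

taint = {}  # container -> {field -> set of tracked data items}

def write(container, field, data_items):
    """Record that `field` of `container` now holds the given data items."""
    taint.setdefault(container, {})[field] = set(data_items)

def copy_field(src, field, dst):
    # Structure-aware: only the copied field's data flows to the destination,
    # instead of conservatively propagating everything in the source.
    write(dst, field, taint.get(src, {}).get(field, set()))

write("file_a", "header", {"d1"})
write("file_a", "body", {"d2"})
copy_field("file_a", "header", "file_b")
print(taint["file_b"])  # only d1 flowed, not d2
```

A structure-agnostic tracker would have to assume that file_b may now contain both d1 and d2, which is exactly the kind of false positive the paper targets.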
Despite the increasing adoption of cloud-based services, concerns regarding the proper future usage and storage of data given to such services remain: Once sensitive data has been released to a cloud service, users often do not know which other organizations or services get access and may store, use or redistribute their data. The research field of usage control tackles such problems by enforcing requirements on the usage of data after it has been given away and is thus particularly important in the cloud ecosystem. So far, research has mainly focused on enforcing such requirements within single systems. This PhD thesis investigates the distributed aspects of usage control, with the goal of enforcing usage control requirements on data that flows between systems, services and applications that may be distributed logically, physically and organizationally. To this end, this thesis contributes by tackling four related subproblems: (1) tracking data flows across systems and propagating corresponding data usage policies, (2) taking distributed policy decisions, (3) investigating adaptivity of today's systems and services, and (4) providing appropriate guarantees. The conceptual results of this PhD thesis will be implemented and instantiated for cloud services, thus contributing to their trustworthiness and acceptance by providing security guarantees for the future usage of sensitive data. The results will be evaluated w.r.t. provided security guarantees, practicability, usability, and performance.
@inproceedings{Kelbert:2013:DUCC:CCGrid.2013.35,
author = {Kelbert, Florian},
title = {{Data Usage Control for the Cloud}},
booktitle = {Proceedings of the 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing},
series = {CCGrid '13},
year = {2013},
isbn = {978-1-4673-6465-2},
location = {Delft, The Netherlands},
pages = {156--159},
numpages = {4},
url = {\url{http://dx.doi.org/10.1109/CCGrid.2013.35}},
doi = {10.1109/CCGrid.2013.35},
publisher = {IEEE}
}
With its digital marketplaces, search engines, social networks and many other services, the Internet can contribute to the realisation of fundamental European values: free self-determination, political participation and economic wellbeing. However, users often pay for Internet services with their data rather than with money, which calls their privacy into question. In light of this tension, acatech shows how an Internet culture can be developed that makes it possible to seize the opportunities of the Internet while protecting people's privacy. This acatech POSITION PAPER contains concrete recommendations for how education, business, law and technology can contribute to such a culture.
@book{acatech:InternetPrivacy:2013.Chancen,
editor = {{acatech (Ed.)}},
title = {{Privatheit im Internet. Chancen wahrnehmen, Risiken einschätzen, Vertrauen gestalten}},
year = {2013},
month = may,
isbn = {978-3-642-37979-6},
numpages = {36},
doi = {10.1007/978-3-642-37980-2},
series = {acatech POSITION PAPER},
publisher = {Springer Vieweg}
}
With its online marketplaces, search engines, social networks and multitude of other services, the Internet can contribute to upholding the basic European values of free self-determination, political participation and economic wellbeing for all citizens. However, the fact that users of online services often pay for them with their personal data instead of money can pose a threat to their privacy. acatech shows how the resulting tensions can be addressed by developing an Internet culture where it is possible to protect people’s privacy while still making the most of the opportunities offered by the Internet. This acatech POSITION PAPER outlines concrete recommendations for how education, business, regulation and technology can contribute to building this culture.
@book{acatech:InternetPrivacy:2013.Opportunities,
editor = {{acatech (Ed.)}},
title = {{Internet Privacy. Taking opportunities, assessing risks, building trust}},
year = {2013},
month = may,
numpages = {34},
doi = {10.1007/978-3-642-37980-2},
series = {acatech POSITION PAPER},
publisher = {Springer Vieweg}
}
A thorough multidisciplinary analysis of various perspectives on internet privacy was published as the first volume of a study, revealing the results of the acatech project “Internet Privacy – A Culture of Privacy and Trust on the Internet.” The second publication from this project presents integrated, interdisciplinary options for improving privacy on the Internet, utilising a normative, value-oriented approach. It exemplifies the ways in which privacy promotes and preconditions fundamental societal values, shows how privacy violations endanger the flourishing of those values, and illuminates the conditions which must be fulfilled in order to achieve a culture of privacy and trust on the Internet. This volume presents options for policy-makers, educators, businesses and technology experts on how to facilitate solutions for more privacy on the Internet, and identifies further research requirements in this area.
@book{acatech:InternetPrivacy:2013.Options,
editor = {{acatech (Ed.)}},
title = {{Internet Privacy. Options for adequate realisation}},
year = {2013},
month = may,
isbn = {978-3-642-37912-3},
numpages = {112},
doi = {10.1007/978-3-642-37913-0},
publisher = {Springer Vieweg}
}
Distributed usage control is concerned with how data may or may not be used in distributed system environments after initial access has been granted. If data flows through a distributed system, there exist multiple copies of the data on different client machines, and usage constraints then have to be enforced for all these clients. We extend a generic model for intra-system data flow tracking, designed and used to track the existence of copies of data on single clients, to the cross-system case. When transferring, i.e., copying, data from one machine to another, our model makes it possible (1) to transfer usage control policies along with the data for the purpose of local enforcement at the receiving end, and (2) to be aware of the existence of copies of the data in the distributed system. As one example, we concretize “transfer of data” to the Transmission Control Protocol (TCP). Based on this concretized model, we develop a distributed usage control enforcement infrastructure that generically and application-independently extends the scope of usage control enforcement to any system receiving usage-controlled data. We instantiate and implement our work for OpenBSD and evaluate its security and performance.
@inproceedings{Kelbert:2013:DUC:2435349.2435358,
author = {Kelbert, Florian and Pretschner, Alexander},
title = {{Data Usage Control Enforcement in Distributed Systems}},
booktitle = {Proceedings of the Third ACM Conference on Data and Application Security and Privacy},
series = {CODASPY '13},
year = {2013},
isbn = {978-1-4503-1890-7},
location = {San Antonio, Texas, USA},
pages = {71--82},
numpages = {12},
url = {\url{http://doi.acm.org/10.1145/2435349.2435358}},
doi = {10.1145/2435349.2435358},
acmid = {2435358},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {data flow tracking, distributed usage control, policy enforcement, security and privacy, sticky policies},
}
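The "sticky policy" idea described above, shipping a usage policy together with the data it protects so that it can be enforced locally at the receiving end, can be sketched as follows. This is an illustrative toy only: the paper instantiates policy transfer at the TCP layer inside OpenBSD, whereas this sketch merely assumes a JSON envelope, and all function and field names are invented.

```python
# Toy sketch of sticky policies: a usage policy travels with its data.
import json

def wrap(data: bytes, policy: dict) -> bytes:
    """Attach the usage policy to the payload before it leaves the system."""
    envelope = {"policy": policy, "data": data.decode("utf-8")}
    return json.dumps(envelope).encode("utf-8")

def unwrap(message: bytes):
    """Receiving side: recover payload and policy for local enforcement."""
    envelope = json.loads(message.decode("utf-8"))
    return envelope["data"].encode("utf-8"), envelope["policy"]

msg = wrap(b"quarterly report",
           {"max_readers": 3, "no_modify_after_archive": True})
data, policy = unwrap(msg)
assert data == b"quarterly report" and policy["max_readers"] == 3
```

Doing this at the transport layer, as in the paper, keeps the mechanism application- and protocol-independent, which an application-level envelope like the one above cannot achieve.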
Today’s electricity grid is evolving into the smart grid, which ought to be reliable, flexible, efficient, and sustainable. To fulfill these requirements, the smart grid draws on a number of core technologies, such as the Advanced Metering Infrastructure (AMI). These technologies facilitate easy and fast accumulation of different data, e.g. fine-grained meter readings. Various security and privacy concerns arise with respect to the gathered data, since research has shown that it is possible to deduce and extract user behaviour from smart meter readings. Hence, these meter readings are very sensitive and require appropriate protection.
Unlike other data protection approaches that are primarily based on data obfuscation and data encryption, we introduce a usage control based data protection mechanism for the smart grid. We show how the concept of distributed data usage control can be integrated with smart grid services and concretize this approach for an energy marketplace that runs on a cloud platform for performance, scalability, and economic reasons.
@incollection{Fromm2012DataProtectioninaCloudEnabledSmartGrid,
author={Fromm, Alexander and Kelbert, Florian and Pretschner, Alexander},
title={{Data Protection in a Cloud-Enabled Smart Grid}},
booktitle={Smart Grid Security},
pages={96--107},
year={2013},
isbn={978-3-642-38029-7},
volume={7823},
series={Lecture Notes in Computer Science},
editor={Cuellar, Jorge},
doi={10.1007/978-3-642-38030-3_7},
url={\url{http://dx.doi.org/10.1007/978-3-642-38030-3_7}},
publisher={Springer Berlin Heidelberg}
}
Recent years have seen an unprecedented growth of Internet-based applications and offerings that have a huge impact on individuals’ daily lives and on the practices of organisations (businesses and governments). These applications are bound to bring large-scale data collection, long-term storage, and systematic sharing of data across various data controllers, i.e., individuals, partner organizations, and scientists. This creates new privacy issues. For instance, emerging Internet-based applications and the underlying technologies provide new ways to track and profile individual users across multiple Internet domains, often without their knowledge or consent. In this section, we present the current state of privacy on the Internet. The section proposes a review and analysis of current threats to individual privacy on the Internet as well as existing countermeasures. Our analysis considers five emerging Internet-based applications, namely personalized web and e-commerce services, online social networks, cloud computing applications, cyber-physical systems, and big data. It outlines privacy-threatening techniques, with a focus on those applications. We conclude with a discussion of technologies that could help address different types of privacy threats and thus support privacy on the Web.
@incollection{acatech:InternetPrivacy:2012.Bestandsaufnahme,
year={2012},
month=sep,
isbn={978-3-642-31942-6},
booktitle={Internet Privacy. Eine multidisziplin{\"a}re Bestandsaufnahme/ A multidisciplinary analysis},
series={acatech Studie},
editor={Buchmann, Johannes},
doi={10.1007/978-3-642-31943-3_4},
title={State of Online Privacy: A Technical Perspective},
url={\url{http://dx.doi.org/10.1007/978-3-642-31943-3_4}},
publisher={Springer Berlin Heidelberg},
author={Kelbert, Florian and Shirazi, Fatemeh and Simo, Hervais and W\"{u}chner, Tobias and Buchmann, Johannes and Pretschner, Alexander and Waidner, Michael},
pages={189--279}
}
Distributed usage control is concerned with how data may or may not be used after initial access to it has been granted and is therefore particularly important in distributed system environments. We present an application- and application-protocol-independent infrastructure that allows for the enforcement of usage control policies in a distributed environment. We instantiate the infrastructure for transferring files using FTP and for a scenario where smart meters are connected to a Facebook application.
@inproceedings{Kelbert:2012:TPE:2295136.2295159,
author = {Kelbert, Florian and Pretschner, Alexander},
title = {{Towards a Policy Enforcement Infrastructure for Distributed Usage Control}},
booktitle = {Proceedings of the 17th ACM Symposium on Access Control Models and Technologies},
series = {SACMAT '12},
year = {2012},
month = jun,
isbn = {978-1-4503-1295-0},
location = {Newark, New Jersey, USA},
pages = {119--122},
numpages = {4},
url = {\url{http://doi.acm.org/10.1145/2295136.2295159}},
doi = {10.1145/2295136.2295159},
acmid = {2295159},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {distributed usage control, policy enforcement, security and privacy, sticky policies}
}
Usage control is concerned with how data is used after access has been granted. Enforcement mechanisms have been implemented for distributed systems like web-based social networks (WBSN) at various levels of abstraction. We extend data usage control to heterogeneous distributed systems by implementing a policy enforcement mechanism for a smart meter connected to a WBSN. The idea is to provide users an opportunity to share their energy usage and other related data within their social group while maintaining control over further usage of that data. The implementation borrows from an existing usage control framework for a common web browser.
@inproceedings{Kumari2012DataProtectionHeterogeneousDistributedSystems,
author = {Kumari, P. and Kelbert, F. and Pretschner, A.},
title = {{Data Protection in Heterogeneous Distributed Systems: A Smart Meter Example}},
booktitle = {Proc. Workshop on Dependable Software for Critical Infrastructures. GI Lecture Notes in Informatics},
month = oct,
year = {2011}
}
Workflow management systems (WfMS) are increasingly used to model and execute complex business processes. Since these processes are executed by many different users, complex security requirements, known as authorization constraints, become necessary.
In today's WfMS, users are granted permissions that entitle them to execute individual activities of a process. Authorization constraints, in contrast, enable more complex permissions. Well-known examples are separation of duty (SoD) and binding of duty (BoD): while SoD requires two or more activities to be executed by different users, BoD requires them to be executed by the same user.
This thesis addresses the use of authorization constraints in WfMS, focusing in particular on their enforcement and validation. Enforcement aims to guarantee that all authorization constraints are satisfied both at modeling time and at runtime of the process. Validation aims to detect contradictions between authorization constraints, and to determine whether all activities of a process can be executed while all authorization constraints are simultaneously enforced.
First, the authorization constraints relevant in WfMS are introduced and categorized. To enable the different authorization constraints to be enforced and validated in a uniform way, a model called MOnK is introduced, which allows them to be modeled uniformly. Enforcement and validation are then carried out on the basis of so-called MOnK constraints. Note that different kinds of authorization constraints exist, some of which can be enforced and validated at modeling time, while others can only be handled at runtime. The thesis therefore presents approaches for enforcing and validating MOnK constraints both at modeling time and at runtime of a process.
The modeling of authorization constraints as MOnK constraints, as well as the parts of enforcement and validation that can be performed at modeling time, were implemented exemplarily.
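The two constraint types described above can be sketched as a simple check over activity-to-user assignments. This is a hypothetical illustration only, not the thesis's MOnK model; the constraint encoding and all names are invented.

```python
# Toy check of the two classic authorization constraints: Separation of Duty
# (SoD) requires different users for the constrained activities, Binding of
# Duty (BoD) requires the same user for all of them.

def satisfies(constraint, executions):
    """Check one SoD/BoD constraint against {activity: user} assignments."""
    done = [a for a in constraint["activities"] if a in executions]
    users = {executions[a] for a in done}
    if constraint["kind"] == "SoD":
        # Every executed constrained activity must have a distinct user.
        return len(users) == len(done)
    if constraint["kind"] == "BoD":
        # All executed constrained activities share one user (or none ran yet).
        return len(users) <= 1
    raise ValueError("unknown constraint kind")

sod = {"kind": "SoD", "activities": ["prepare_payment", "approve_payment"]}
bod = {"kind": "BoD", "activities": ["open_case", "close_case"]}
run = {"prepare_payment": "alice", "approve_payment": "bob",
       "open_case": "carol", "close_case": "carol"}
print(satisfies(sod, run), satisfies(bod, run))  # True True
```

A runtime check like this can only reject a violating assignment as it happens; validating at modeling time whether a process can complete at all under its constraints is the harder problem the thesis addresses.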
@mastersthesis{Kelbert2010Authorization,
author = {Florian Kelbert},
title = {{Authorization Constraints in Workflow Management Systemen}},
type = {Diploma Thesis},
school = {Ulm University, Germany},
year = 2010,
month = jun
}
Teaching
PhD Mentor
- Mohsen Ahmadvand: Software Integrity Protection, Technical University of Munich, Germany, In progress since February 2017.
Supervised Theses
- Automatic Generation of Secure and Usable Mnemonic Passphrases, Master's Thesis, Technical University of Munich, Germany, 136 pages, May 2016.
- Securing Data Usage Control Infrastructures, Master's Thesis, Technical University of Munich, Germany, 161 pages, August 2015.
- Monitoring Compliance of Third-Party Applications in Online Social Networks, Bachelor's Thesis, Technical University of Munich, Germany, 49 pages, October 2014.
- A Comprehensive Usage Control System for Distributed Usage Control, Master's Thesis, University of Kaiserslautern, Germany, 129 pages, September 2011.
Exercises
- Foundations of Program and System Development (WS15, WS14)
- Programming (WS11, SS11, WS10)
Labs
- Security Engineering (WS13, WS12, SS11)
Seminars
- Human-Centered Security (WS15)
- Evaluating Security Approaches (SS14)
- Secure IT Systems (WS11)
Tutorials
- Introduction to Software Engineering (SS15, SS14)
- Foundations of Programming (WS13)
Ongoing projects
SecureCloud
Secure Big Data Processing in Untrusted Clouds.
Duration: 01/2016 - 12/2018
Web: securecloudproject.eu
Funding: European Commission, Horizon 2020 Programme
SERECA
Secure Enclaves For Reactive Cloud Applications.
Duration: 03/2015 - 02/2018
Web: serecaproject.eu
Funding: European Commission, Horizon 2020 Programme
Past projects
Munich Center for Internet Research
The MCIR studies the socio-cultural implications of digitization.
Duration: 12/2015 - 11/2016
Web: mcir.digital
Funding: Bavarian State Ministry for Education, Science and the Arts
Internet Privacy
A Culture of Privacy and Trust for the Internet.
Duration: 08/2011 - 05/2013
Funding: German Federal Ministry of Education and Research (BMBF)
Peer Energy Cloud
Cloud Enabled Smart Energy Micro Grids.
Duration: 09/2011 - 08/2014
Web: peerenergycloud.de
Funding: German Federal Ministry of Economics and Technology (BMWi)
Awards and Service
Awards
- Best Poster Award, 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), May 2013, Delft, The Netherlands.
Service
- Journal Reviewing:
- IEEE Transactions on Dependable and Secure Computing (TDSC)
- PC Member:
- Reviewer:
Invited Talks
- Distributed Data Usage Control: Potentials and Challenges (Verteilte Daten-Nutzungskontrolle: Potenziale und Herausforderungen). In CAST Workshop "Technical Data Protection - Trends and Challenges", CAST e.V., April 2013, Darmstadt, Germany.