GameSec 2019

Conference on Decision and Game Theory for Security

October 30 - November 1, 2019, Stockholm, Sweden


GameSec 2019, the 10th Conference on Decision and Game Theory for Security, will take place in Stockholm, Sweden, on October 30 - November 1, 2019.

The conference proceedings will be published by Springer as part of the LNCS series.

Description

As we close the second decade of the 21st century, modern societies are more dependent than ever on information, automation, and communication technologies. Managing the security of these emerging systems, many of them safety critical, poses significant challenges. The 10th Conference on Decision and Game Theory for Security focuses on the protection of heterogeneous, large-scale, and dynamic cyber-physical systems, as well as on managing the security risks faced by critical infrastructures, through rigorous and practically relevant analytical methods. GameSec 2019 invites novel, high-quality theoretical and practically relevant contributions that apply decision and game theory, as well as related techniques such as optimization, machine learning, dynamic control, and mechanism design, to build resilient, secure, and dependable networked systems. The goal of GameSec 2019 is to bring together academic and industrial researchers in an effort to identify and discuss the major technical challenges and recent results that highlight the connections between game theory, control, distributed optimization, machine learning, economic incentives, and real-world security, reputation, trust, and privacy problems.

Conference Topics include (but are not restricted to):

GameSec solicits research papers that report original results and have neither been published nor submitted for publication elsewhere, on the following and other closely related topics:

  • Game theory, control, and mechanism design for security and privacy
  • Decision making for cybersecurity and security requirements engineering
  • Security and privacy for the Internet-of-Things, cyber-physical systems, cloud computing, resilient control systems, and critical infrastructure
  • Pricing, economic incentives, security investments, and cyber insurance for dependable and secure systems
  • Risk assessment and security risk management
  • Security and privacy of wireless and mobile communications, including user location privacy
  • Socio-technological and behavioral approaches to security
  • Empirical and experimental studies with game, control, or optimization theory-based analysis for security and privacy
  • Adversarial Machine Learning and the role of AI in system security

Special Sessions on "Adversarial AI" and "Cyber-Physical System Security"

The conference will have special sessions focusing on the timely and exciting research topics of “Adversarial AI” and “Cyber-Physical System Security”. Researchers who wish to present novel results on these topics are encouraged to consider these sessions.

Paper Submission

Authors should consult Springer’s authors’ guidelines and use their proceedings templates, either for LaTeX or for Word, for the preparation of their papers. Springer encourages authors to include their ORCIDs in their papers. In addition, the corresponding author of each paper, acting on behalf of all of the authors of that paper, must complete and sign a Consent-to-Publish form. The corresponding author signing the copyright form should match the corresponding author marked on the paper. Once the files have been sent to Springer, changes relating to the authorship of the papers cannot be made.

Tutorial Session on "Adversarial Machine Learning"

The tutorial session will be given by Murat Kantarcioglu (University of Texas at Dallas).

The tutorial slides can be found here.

Keynote Speakers

Day 1

Prof. Mingyan Liu

Bio : Mingyan Liu received her Ph.D. in electrical engineering from the University of Maryland, College Park, in 2000. She has since been with the Department of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor, where she is currently a Professor and the Peter and Evelyn Fuss Chair of Electrical and Computer Engineering.
Her research interests are in optimal resource allocation, performance modeling, sequential decision and learning theory, game theory and incentive mechanisms, with applications to large-scale networked systems, cybersecurity and cyber risk quantification. She is the recipient of the 2002 NSF CAREER Award, the University of Michigan Elizabeth C. Crosby Research Award in 2003 and 2014, the 2010 EECS Department Outstanding Achievement Award, the 2015 College of Engineering Excellence in Education Award, the 2017 College of Engineering Excellence in Service Award, and the 2018 Distinguished University Innovator Award. She has received a number of Best Paper Awards, including at the IEEE/ACM International Conference on Information Processing in Sensor Networks (IPSN) in 2012 and at the IEEE/ACM International Conference on Data Science and Advanced Analytics (DSAA) in 2014. She has served on the editorial boards of IEEE/ACM Trans. Networking, IEEE Trans. Mobile Computing, and ACM Trans. Sensor Networks. She is a Fellow of the IEEE and a member of the ACM.

Title : From Risk Transfer to Risk Mitigation in Contract Design: Cyber Insurance as an Incentive Mechanism for Cybersecurity
Slides : PDF
Abstract : With increasingly frequent and ever more costly data breaches and other cyber incidents, firms are turning to cyber insurance as a risk management instrument. However, much like other types of insurance, cyber insurance is fundamentally a method of risk transfer. With typical issues of moral hazard and information asymmetry, the insured is generally inclined to lower its effort within a contract, leading to a worse state of security. To use cyber insurance as an incentive mechanism to encourage better security practices and higher security investment, a commonly used concept is premium discrimination, i.e., an insured pays a lower premium for exerting higher effort. However, using premium discrimination effectively faces two challenges: (1) one needs to be able to accurately assess the effort exerted by the insured, and (2) cyber risks are notoriously interdependent at the firm level: an insured's risk is a function of not only its own effort, but also the efforts of its vendors and suppliers. This externality makes the underlying contract design problem quite different from what is typically studied in the literature.
With these two challenges in mind, I will first present our research on quantitative assessment of an organization's cyber risk from externally observable properties, by applying modern machine learning techniques to large quantities of Internet measurement data. This firm-level security posture assessment, or "pre-screening," makes premium discrimination feasible. I then consider a contract design problem with a single profit-maximizing, risk-neutral insurer (principal) and voluntarily participating, risk-averse insureds (agents). We show that risk dependency among agents leads to a "profit opportunity" for the insurer, created by the inefficient effort levels exerted by agents who do not account for risk externalities when outside of a contract. Pre-screening then allows the insurer to take advantage of this opportunity by designing appropriate contract terms that incentivize agents to internalize the externalities. We identify conditions under which this type of contract leads not only to increased profit for the principal, but also to an improved state of network security. This result further allows us to investigate and compare typical policy portfolios and show how cyber risk dependencies can be taken into account when underwriting policies. This is demonstrated using a commonly practiced rate-schedule-based policy framework.
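The gap between individually and socially optimal security effort under interdependent risk, and how a discriminated premium can close it, can be made concrete with a small numerical model. The Python sketch below is an illustrative assumption, not the model from the talk: the breach-probability function, the parameters, and the premium schedule are all made up for exposition. The premium charges each fully insured firm its own expected loss plus the externality it imposes on its peer, which pre-screening of effort makes observable and therefore chargeable.

# A minimal numerical sketch (illustrative parameters and functional forms,
# not taken from the talk) of premium discrimination under interdependent risk.
import numpy as np

L_loss = 100.0   # loss a firm suffers if a breach occurs
c = 8.0          # unit cost of security effort
q0 = 0.5         # breach probability at zero effort
beta = 0.6       # strength of the risk externality from the peer firm

def breach_prob(own_effort, peer_effort):
    # Decreasing in own effort; a lax peer (low peer_effort) raises own risk.
    return q0 * np.exp(-own_effort) * (1.0 + beta * np.exp(-peer_effort)) / (1.0 + beta)

efforts = np.linspace(0.0, 3.0, 3001)

# Nash equilibrium without insurance: each firm minimizes only its own cost and
# ignores the externality its effort has on the peer.
e_nash = 1.0
for _ in range(200):
    e_nash = efforts[np.argmin(c * efforts + L_loss * breach_prob(efforts, e_nash))]

# Socially optimal symmetric effort: minimize the two firms' total cost.
e_social = efforts[np.argmin(2.0 * c * efforts + 2.0 * L_loss * breach_prob(efforts, efforts))]

# Premium discrimination enabled by pre-screening: the insurer observes effort and
# charges each fully insured firm its own expected loss plus the externality it
# imposes on the peer (both evaluated with the peer held at the target e_social).
def premium(e):
    own_loss = L_loss * breach_prob(e, e_social)
    externality = L_loss * (breach_prob(e_social, e) - breach_prob(e_social, e_social))
    return own_loss + externality

e_contract = efforts[np.argmin(c * efforts + premium(efforts))]

print(f"effort at Nash equilibrium (no contract): {e_nash:.2f}")
print(f"socially optimal effort:                  {e_social:.2f}")
print(f"effort induced by the premium schedule:   {e_contract:.2f}")

With these illustrative parameters, the equilibrium effort falls short of the social optimum because each firm ignores the benefit its effort confers on the other, while the effort induced by the discriminated premium matches the social optimum (up to the grid resolution), which is the sense in which pre-screening lets contract terms internalize the externality.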

Day 2

Dr. Reza Ghanadan

Bio : Dr. Reza Ghanadan is a senior manager at Google, where he leads the Cloud Artificial Intelligence research and strategic technology programs. His research interest is in methods for creating robust and reliable AI systems for real-world applications. Prior to joining Google, he held several executive management and technical leadership positions in high-tech research organizations with a focus on intelligent systems, including program manager at DARPA, chief network engineer and technical fellow at Boeing Research & Technology, engineering fellow and technical director at BAE Systems, member of technical staff and group leader at AT&T Bell Laboratories, and founding team member of Flarion Technologies (later acquired by Qualcomm). Reza received a Ph.D. in Electrical Engineering from the University of Maryland, College Park, and holds an Executive MBA from NYU, an M.S. in Electrical Engineering, and two B.S. degrees, in Electrical Engineering and in Physics (both summa cum laude). He has been awarded 18 patents and has 30+ peer-reviewed publications.
Dr. Ghanadan has been involved in a number of high-profile R&D programs, and his research has been featured in several media outlets (e.g., the Washington Post, NBC News, Science, Analytics India). He has led the creation of several transformational methods and products that successfully demonstrate foundational applications of AI and ML in a range of complex domains, including autonomy, perception, healthcare, social-cognitive systems, life sciences, and social sciences. He has been an invited speaker at a number of conferences, workshops, and industry panels on these topics. His robotics autonomy research program, “machines that learn tasks from watching YouTube videos”, ranked among the top 10 most popular DARPA programs of 2015 based on nearly 20 million website visits.
Dr. Ghanadan has directed research and development in the areas of intelligent systems, machine learning, and data science, and their application in science, engineering, and a wide range of products. To that end, he has formed several multidisciplinary teams of R&D scientists and engineers to advance state-of-the-art research in AI and to investigate its value in a range of industries and scientific fields. His work has led to the creation of a robust AI framework for characterizing the vulnerabilities of AI-based systems and investigating methods to improve the safety, security, and reliability of these systems.
Reza was the recipient of the Boeing Technology Innovation Award for the design of an efficient platform optimized for large-scale distributed mobile applications. At BAE Systems, he received BAE’s Gold Chairman’s Award for Innovation for successfully launching an adaptive mobile ad-hoc networking protocol optimized for real-time adaptation to traffic and to highly dynamic variations in network topology.

Title : Assuring AI for Real-World Decision Making: Robust AI Design
Abstract : AI is a powerful technology that is becoming increasingly complex. It will influence all aspects of human society, from how we work and commute to how we learn and even how we think. It is also creating exciting new opportunities for transformational applications that improve the lives of people around the world, from business to healthcare to education. Such a powerful technology brings with it a responsibility for safety, security, and reliability.
For example, there is a shift underway in how software is built for more and more real-world applications, moving from the traditional programming approach to machine-learning-based inference code. This change is opening up exciting new possibilities, but it is also posing new challenges for reliable operation in production, including fairness, interpretability, and privacy.
As such, the development of AI and its applications raises new questions about effective methods to better characterize and mitigate the vulnerabilities of AI systems. In particular, what is the best way to design more robust AI products with safety and security built into these systems? We highlight examples of these vulnerabilities and challenges. We also develop an analytical framework to better characterize the impact of these vulnerabilities across the AI stack, and we introduce several approaches to improving the robustness of AI and machine learning systems for real-world applications.
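As one concrete illustration of the kind of vulnerability referred to above, the short Python sketch below is an assumption made for exposition (it is not an example from the talk): it trains a toy linear classifier and then flips its prediction with a small, gradient-aligned perturbation of the input, in the style of the fast gradient sign method.

# Illustrative only: a toy linear model and an FGSM-style adversarial perturbation.
import numpy as np

rng = np.random.default_rng(0)
d, n = 100, 400                              # input dimension, training points per class

# Two weakly separated Gaussian classes (class 0 around -0.2, class 1 around +0.2).
X = np.vstack([rng.normal(-0.2, 1.0, (n, d)), rng.normal(0.2, 1.0, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Logistic regression trained with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(v):
    return int((v @ w + b) > 0.0)

# FGSM-style perturbation of a class-1 test point: move each coordinate by +/- eps
# in the direction that increases the loss (the sign of the input gradient).
x = rng.normal(0.2, 1.0, d)
p_x = 1.0 / (1.0 + np.exp(-(x @ w + b)))
grad_x = (p_x - 1.0) * w                     # gradient of the cross-entropy loss w.r.t. x, true label 1
eps = 0.4                                    # per-coordinate budget, below the input noise level of 1.0
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:      ", predict(x))       # typically 1 (correct)
print("adversarial prediction:", predict(x_adv))   # typically flips to 0

Each coordinate moves by less than half the input noise level, yet because the changes align with the model's weight vector they accumulate across all 100 dimensions and typically flip the decision; guarding against exactly this kind of accumulation is one example of the robustness concerns raised above.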

Conference Sponsors and Supporters

We thank all our sponsors for their kind support.

GameSec 2019 Proceedings

GameSec 2019 proceedings are published by Springer as part of the LNCS series. During the conference, the proceedings will be available free of charge online.