GameSec 2023

Conference on Decision and Game Theory for Security

October 18-20, 2023, Avignon, France


The 14th Conference on Decision and Game Theory for Security (GameSec-23) will take place October 18-20, 2023, in Avignon, France. With the rapid development of information, automation, and communication technology, the security of these emerging systems is more important now than ever. GameSec 2023 focuses on the protection of heterogeneous, large-scale, and dynamic cyber-physical systems, as well as on managing the security risks faced by critical infrastructures, through rigorous and practically relevant analytical methods. GameSec 2023 invites novel, high-quality theoretical and empirical contributions that leverage decision theory and game theory to address security and related problems such as privacy, trust, or bias in emerging systems. The goal of the conference is to bring together academic, government, and industrial researchers to identify and discuss the major challenges and recent results that highlight the interdisciplinary connections between game theory, control, distributed optimization, adversarial reasoning, machine learning, mechanism design, behavioral analysis, risk assessment, and problems of security, reputation, trust, and privacy.

GameSec-23 is planned as an in-person event. Requests for remote attendance, e.g., due to visa issues or travel restrictions, may be accommodated if necessary. Details on up-to-date Covid regulations are available on the conference website.

The Best Paper Award will be announced during the closing session on Friday.

NEW for GameSec 2023:

  • Some rejected papers will have the opportunity to be presented at the conference as posters after a rebuttal phase.
  • The conference can be attended online at a reduced price, but only for the opening/closing sessions and the plenary sessions.
  • A Special Issue of Dynamic Games and Applications (DGAA journal) will be dedicated to selected high-quality articles from the conference.

Conference Topics include (but are not restricted to):

GameSec solicits research papers that report original results and have neither been published nor submitted for publication elsewhere, on the following and other closely related topics:

  • Game theory, control, and mechanism design for security and privacy
  • Decision making and decision theory for cybersecurity and security requirements engineering
  • Security and privacy for the Internet-of-Things, cyber-physical systems, cloud computing, resilient control systems, and critical infrastructure
  • Pricing, economic incentives, security investments, and cyber insurance for dependable and secure systems
  • Risk assessment and security risk management
  • Security and privacy of wireless and mobile communications, including user location privacy
  • Socio-technological and behavioral approaches to security
  • Empirical and experimental studies with game, control, or optimization theory-based analysis for security and privacy
  • Behavioral science, decision making, heuristics, and biases
  • Modeling and analysis of deception for security within a game-theoretic framework
  • Adversarial or strategic machine learning and the role of AI in system security
  • Learning in games for security (new topic this year)
  • Game-theoretic or decision-theoretic analysis for the control of epidemics/virus propagation (new topic this year)
  • Decision and game theory for blockchain security (new topic this year)

Keynote Speakers

We are happy to announce the following keynote speakers:
Patricia Bouyer-Decitre

The True Colors of Memory: A Tour of Chromatic-Memory Strategies in Zero-Sum Games on Graphs

Abstract:

Two-player turn-based zero-sum games on (finite or infinite) graphs are a central framework in theoretical computer science — notably as a tool for controller synthesis, but also due to their connection with logic and automata theory. A crucial challenge in the field is to understand how complex strategies need to be to play optimally, given a type of game and a winning objective. I will give a tour of recent advances aiming to characterize games where finite-memory strategies suffice (i.e., using a limited amount of information about the past). We mostly focus on so-called chromatic memory, which is limited to using colors — the basic building blocks of objectives — seen along a play to update itself. Chromatic memory has the advantage of being usable in different game graphs, and the corresponding class of strategies turns out to be of great interest to both the practical and the theoretical sides.
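
To make the idea of chromatic memory concrete, here is a minimal sketch written for this page rather than taken from the talk: a finite-memory strategy is modeled as a Mealy machine whose state is updated only from the colors observed along the play, so the same memory structure can be plugged into different game graphs. All names (Arena, ChromaticMemoryStrategy) and the toy two-state memory are illustrative assumptions.

# Minimal, illustrative sketch (not material from the talk) of a chromatic-memory
# strategy viewed as a Mealy machine: the memory state is updated only from the
# colors seen along a play, so the same memory structure can be reused across
# different game graphs.

from dataclasses import dataclass


@dataclass
class Arena:
    """Turn-based game graph: colored edges and an owner (player 0 or 1) per vertex."""
    edges: dict   # vertex -> list of (color, successor) pairs
    owner: dict   # vertex -> 0 (protagonist) or 1 (antagonist)


@dataclass
class ChromaticMemoryStrategy:
    """Finite chromatic memory plus a next-move function for the protagonist."""
    init: str
    update: dict  # (memory_state, color) -> memory_state; depends on colors only
    choice: dict  # (vertex, memory_state) -> chosen (color, successor) edge

    def play(self, arena, start, antagonist_policy, steps=10):
        """Unroll a finite play prefix; the memory evolves from observed colors only."""
        v, m, colors_seen = start, self.init, []
        for _ in range(steps):
            if arena.owner[v] == 0:
                color, nxt = self.choice[(v, m)]
            else:
                color, nxt = antagonist_policy(v, colors_seen)
            colors_seen.append(color)
            m = self.update[(m, color)]   # chromatic update: the color alone drives the memory
            v = nxt
        return colors_seen


# Toy example: two memory states remembering the parity of occurrences of color "b".
arena = Arena(
    edges={"u": [("a", "v"), ("b", "v")], "v": [("a", "u"), ("b", "u")]},
    owner={"u": 0, "v": 1},
)
strategy = ChromaticMemoryStrategy(
    init="even",
    update={("even", "a"): "even", ("even", "b"): "odd",
            ("odd", "a"): "odd", ("odd", "b"): "even"},
    choice={("u", "even"): ("a", "v"), ("u", "odd"): ("b", "v")},
)
print(strategy.play(arena, "u", lambda v, seen: arena.edges[v][0]))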

Bio:

Patricia Bouyer holds a PhD in Computer Science from ENS Cachan (2002). She was a CNRS researcher from 2002 to 2020 at the "Laboratoire Spécification et Vérification" (LSV, CNRS & ENS Cachan, France) and is now the head of the newly created "Laboratoire Méthodes Formelles" (LMF, Université Paris-Saclay, CNRS, ENS Paris-Saclay, France). She has held visiting positions at Aalborg University (Denmark) in 2002 and Oxford University (UK) in 2007. Her main research topics are model checking, game theory, and quantitative aspects of verification. She was the principal investigator of the ERC Starting Grant project EQualIS (2013-2019), and she received a Marie Curie fellowship in 2006, the Bronze Medal of CNRS in 2007, and the Presburger Award of the EATCS in 2011.


Yevgeniy Vorobeychik

Towards Trustworthy Autonomous AI-Driven Systems

Abstract:

Autonomous systems are critical to the future of mobility. For decades, autonomy was restricted to perceptually simple problems, such as car or airplane autopilot. Then came the deep learning revolution, which radically transformed perceptual reasoning ability. This enabling technology, in turn, promised to make autonomous driving a reality, with numerous other applications, such as autonomous delivery and warehouse management, soon to follow. Yet, this has not happened. A major open problem is that real perceptual complexity involves a long tail of unforeseen circumstances, some actually adversarial, others simply sufficiently unusual, that fool the surprisingly fragile deep neural networks into predicting the wrong thing. This, in turn, can lead to devastating consequences, such as deadly crashes.

In this talk, I will discuss our work on robust learning and control in complex perceptual environments. First, I will describe our curriculum-based approach for robust reinforcement learning (RL) in adversarial settings. While recent work has made some progress in this regard, prior approaches nevertheless tolerate only very small amounts of adversarial noise in inputs. I will present a novel curriculum adversarial RL framework that takes advantage of the structure of adversarial input perturbations, which induces a natural ordering of training instances by learning difficulty. Because stability of learning is a major challenge in RL, a key part of our solution is training stabilization through limited retraining at the same difficulty level, anchoring future rounds in more robust initial policies. We show that our framework offers dramatic improvements in adversarial robustness compared to prior art. Next, I will shift focus to attaining provable adversarial robustness guarantees for learning and control. To this end, I will discuss a randomized smoothing approach that enables adversarial robustness certification of RL-based policies, and subsequently present an approach focused on provable safety guarantees in adversarial settings. Finally, I will discuss our recent work on learning provably stable policies in discrete-time nonlinear dynamical systems using the Lyapunov control function framework. First, I will present an approach that enables us to learn stable control in domains for which no existing computational alternative yields meaningful stability guarantees. I will then describe extensions of this approach that enable us to obtain provable stability for neural network dynamics even under adversarial perturbations to state estimates.
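
As a rough, generic illustration of the randomized-smoothing idea mentioned above (a sketch in the standard smoothing style, not the speaker's actual method or code; the function names, toy policy, and constants are assumptions): a discrete-action policy's choice at a given state can be certified against l2-bounded state perturbations by majority-voting over Gaussian-perturbed copies of the observation and converting the vote margin into a radius.

# Generic sketch of randomized smoothing for a discrete-action policy.
# Everything here (names, the toy policy, the use of empirical vote frequencies
# instead of rigorous confidence bounds) is an illustrative assumption.

import numpy as np
from scipy.stats import norm


def smoothed_action(policy_action, state, sigma=0.25, n_samples=1000, seed=0):
    """Majority-vote action under Gaussian noise on the state, plus an l2 radius
    within which the voted action provably stays the same (up to sampling error)."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n_samples):
        noisy_state = state + rng.normal(scale=sigma, size=state.shape)
        a = policy_action(noisy_state)
        counts[a] = counts.get(a, 0) + 1
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    # Clamp empirical frequencies so the Gaussian quantile function stays finite.
    p_top = min(ranked[0][1] / n_samples, 1.0 - 1e-6)
    p_second = max(ranked[1][1] / n_samples if len(ranked) > 1 else 0.0, 1e-6)
    radius = 0.5 * sigma * (norm.ppf(p_top) - norm.ppf(p_second))
    return ranked[0][0], max(radius, 0.0)


# Toy policy: action 1 if the first state coordinate is non-negative, else action 0.
toy_policy = lambda s: int(s[0] >= 0)
action, certified_radius = smoothed_action(toy_policy, np.array([0.8, -0.3]))
print(action, round(certified_radius, 3))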

Bio:

Yevgeniy Vorobeychik is a Professor of Computer Science & Engineering at Washington University in Saint Louis. Previously, he was an Assistant Professor of Computer Science at Vanderbilt University. Between 2008 and 2010, he was a post-doctoral research associate in the Computer and Information Science department at the University of Pennsylvania. He received Ph.D. (2008) and M.S.E. (2004) degrees in Computer Science and Engineering from the University of Michigan, and a B.S. degree in Computer Engineering from Northwestern University. His work focuses on game-theoretic modeling of security and privacy, adversarial machine learning, algorithmic and behavioral game theory and incentive design, optimization, agent-based modeling, complex systems, network science, and epidemic control. Dr. Vorobeychik received an NSF CAREER award in 2017 and was invited to give an IJCAI-16 early career spotlight talk. He has also received several Best Paper awards, including one of the 2017 Best Papers in Health Informatics. He was nominated for the 2008 ACM Doctoral Dissertation Award and received honorable mention for the 2008 IFAAMAS Distinguished Dissertation Award.

Conference Sponsors and Supporters

We invite you to participate in the sponsor program for GameSec-23. The conference will be held in person on October 18-20, 2023. GameSec is an annual international conference, started in 2010, that focuses on the protection of heterogeneous, large-scale, and dynamic cyber-physical systems, as well as on managing the security risks faced by critical infrastructures, through rigorous and practically relevant analytical methods, especially game-theoretic and decision-theoretic methods. The proceedings of the conference are published by Springer.

The GameSec conference attracts 25-50 students, researchers, and practitioners every year from all around the world. Your participation in the GameSec sponsor program will give you visibility to this diverse group, which has interest and expertise in security, privacy, game theory, decision theory, and more.

Sponsor benefits include:

  • Sponsor company name and logo will be displayed on website and at the venue
  • Opportunity for sponsored awards (best paper and best paper honorable mention)
  • Opportunity to provide named travel grant
  • Acknowledgment in opening talk and closing remarks

Current sponsors and supporters:

  • Avignon University
  • Springer (Best Paper Award)

Code of Conduct

The GameSec community values Diversity, Equity, and Inclusion (DEI). GameSec’s Code of Conduct clearly outlines undesirable behaviors and subsequent corrective actions in detail.

GameSec 2023 Proceedings

The GameSec 2023 proceedings will be published by Springer as part of the LNCS series.