GameSec 2025

Conference on Game Theory and AI for Security

October 13-15, 2025, Athens, Greece

Paper Important Dates

Submission

June 13, 2025 → June 30, 2025 → July 7, 2025 (final)

Decision Notification

July 25, 2025

Camera-ready

August 20, 2025

Author Registration Deadline

August 29, 2025

Keynote Speakers

We are happy to announce the following Keynote Speakers:


Marta Kwiatkowska, Professor

University of Oxford, England

Talk Title: Stochastic Games with Neural Perception Mechanisms: A Formal Methods Perspective


Abstract: Strategic reasoning is necessary to ensure stable multi-agent coordination in complex environments, as has been demonstrated in fields such as economics and computer networks. As AI becomes embedded in computing infrastructure, there is a growing need for modelling methodologies to support the development of emerging applications such as multi-robot planning and autonomous driving. Stochastic games are a well-established model for multi-agent sequential decision making under uncertainty, which has been employed for strategy synthesis as well as formal verification. More recently, however, agents in these models perceive their environment using data-driven approaches such as neural networks trained on continuous data.
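To make the stochastic-games setting concrete, here is a minimal sketch, not code from the talk or from any verification tool: a turn-based, two-player zero-sum stochastic game solved by value iteration, where player 1 maximizes and player 2 minimizes discounted reward. All states, owners, transitions, and rewards are invented toy data.

```python
# Value iteration for a turn-based, two-player zero-sum stochastic game
# (a simplification of the concurrent games used in formal verification).

def value_iteration(states, owner, actions, trans, reward, gamma=0.9, tol=1e-6):
    """owner[s] is 1 (maximizer) or 2 (minimizer); trans[(s, a)] is a
    list of (probability, next_state) pairs; reward[s] is a state reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            q = [reward[s] + gamma * sum(p * V[t] for p, t in trans[(s, a)])
                 for a in actions[s]]
            new_v = max(q) if owner[s] == 1 else min(q)
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < tol:
            return V

# Toy game: player 1 chooses at s0, player 2 at s1, s2 is absorbing.
states = ["s0", "s1", "s2"]
owner = {"s0": 1, "s1": 2, "s2": 1}
actions = {"s0": ["a", "b"], "s1": ["a"], "s2": ["a"]}
trans = {
    ("s0", "a"): [(1.0, "s1")],
    ("s0", "b"): [(0.5, "s1"), (0.5, "s2")],
    ("s1", "a"): [(1.0, "s2")],
    ("s2", "a"): [(1.0, "s2")],
}
reward = {"s0": 0.0, "s1": 0.0, "s2": 1.0}
V = value_iteration(states, owner, actions, trans, reward)
```

The loop is a contraction for gamma < 1, so it converges to the unique game value; production tools additionally synthesize the witnessing strategies rather than just the values.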

Biography

Marta Kwiatkowska is a Professor at the University of Oxford and Fellow of Trinity College. Her area of expertise lies in probabilistic and quantitative verification techniques and the synthesis of correct-by-construction systems from quantitative specifications. She led the development of the probabilistic model checker PRISM, winner of the 2024 ETAPS Test-of-Time Tool Award, which has been used to model and verify numerous case studies across a variety of application domains. Recently, she has been focusing on safety and trust in artificial intelligence, with an emphasis on robustness guarantees for machine learning. Her research has been supported by two ERC Advanced Grants, VERIWARE and FUN2MODEL, an EPSRC Programme Grant on Mobile Autonomy, and the EPSRC Prosperity Partnership FAIR. Kwiatkowska won the Royal Society Milner Award, the BCS Lovelace Medal and the Van Wijngaarden Award, and received an honorary doctorate from KTH Royal Institute of Technology in Stockholm. She is a Fellow of the Royal Society, a Fellow of the ACM, a Member of Academia Europaea, and an International Honorary Member of the AAAS.




Milind Tambe, Professor

Harvard University and Google DeepMind, USA

Talk Title: Generative AI and Green Security Games for social impact: From conservation to public health


Abstract: For nearly two decades, my team's work on AI for Social Impact (AI4SI) has focused on optimizing limited resources in public health, conservation, and public safety. I will begin by highlighting our work on green security games, which adapts the Stackelberg security game framework to protect natural resources and combat environmental crime. We have used these models in national parks globally, and my talk will focus on our most recent efforts: using generative AI (specifically flow models) to build more accurate models of poacher behavior. We then combine these predictions with game theory to design strategic patrol plans. To address settings with limited data, I will also showcase the use of composite flow matching models to aid transfer reinforcement learning. We apply a similar methodology of combining machine learning with resource optimization across our portfolio.
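The Stackelberg structure underlying green security games can be illustrated with a toy example: the defender commits to randomized coverage of targets, and the attacker observes the coverage and best-responds. The sketch below uses invented payoffs and a grid search for clarity; it is not the talk's methodology, and real deployments use LP-based solvers over many targets and resources.

```python
# Toy two-target Stackelberg security game with one defender resource.
# Payoffs per target: (defender if covered, defender if uncovered,
#                      attacker if covered, attacker if uncovered).
# All numbers are invented for illustration.
targets = {
    "waterhole": (2.0, -5.0, -3.0, 4.0),
    "ridge":     (1.0, -2.0, -1.0, 2.0),
}

def best_response(coverage):
    """Attacker picks the target maximizing expected attacker utility."""
    def att_u(t):
        _, _, ac, au = targets[t]
        c = coverage[t]
        return c * ac + (1 - c) * au
    return max(targets, key=att_u)

def defender_utility(coverage):
    """Defender's expected utility given the attacker's best response."""
    t = best_response(coverage)
    dc, du, _, _ = targets[t]
    c = coverage[t]
    return c * dc + (1 - c) * du

# Grid search over coverage of "waterhole"; the remainder of the single
# resource goes to "ridge". Real solvers replace this with linear programs.
best = max(
    ({"waterhole": x / 100, "ridge": 1 - x / 100} for x in range(101)),
    key=defender_utility,
)
```

The grid search recovers the characteristic Stackelberg trade-off: the defender shifts just enough coverage onto the high-value target to push the attacker toward the less damaging one.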

Biography

Milind Tambe is Gordon McKay Professor of Computer Science at Harvard University; concurrently, he is also Principal Scientist at Google DeepMind. Prof. Tambe and his team have developed pioneering AI systems that deliver real-world impact in public health (e.g., maternal and child health), public safety, and wildlife conservation. He is the recipient of the AAAI Award for Artificial Intelligence for the Benefit of Humanity, the AAAI Feigenbaum Prize, the IJCAI John McCarthy Award, the AAAI Robert S. Engelmore Memorial Lecture Award, the AAMAS ACM Autonomous Agents Research Award, the INFORMS Wagner Prize for excellence in Operations Research practice, the Military Operations Research Society Rist Prize, the Columbus Fellowship Foundation Homeland Security Award, and commendations and certificates of appreciation from the US Coast Guard, the Federal Air Marshals Service, and the airport police of the City of Los Angeles. He is a Fellow of AAAI and ACM.




Michael Jordan, Professor

Inria Paris, France and University of California, Berkeley, USA

Talk Title: A Collectivist, Economic Perspective on AI


Abstract: Information technology is in the midst of a revolution in which omnipresent data collection and machine learning are impacting the human world as never before. The word "intelligence" is being used as a North Star for the development of this technology, with human cognition viewed as a baseline. This view neglects the fact that humans are social animals, and that much of our intelligence is social and cultural in origin. Thus, a broader framing is to consider the system level, where the agents in the system, be they computers or humans, are active, they are cooperative, and they wish to obtain value from their participation in learning-based systems. Agents may supply data and other resources to the system only if it is in their interest to do so, and they may be honest and cooperative only if it is in their interest to do so. Critically, intelligence inheres as much in the overall system as it does in individual agents. This is a perspective that is familiar in economics, although without the focus on learning algorithms. A key challenge is thus to bring (micro)economic concepts into contact with foundational issues in the computing and statistical sciences. I'll discuss some concrete examples of problems and solutions at this tripartite interface.

Biography

Michael I. Jordan is a researcher at Inria Paris and Professor Emeritus at the University of California, Berkeley. His research interests bridge the computational, statistical, cognitive, biological and social sciences. Prof. Jordan is a member of the National Academy of Sciences, a member of the National Academy of Engineering, a member of the American Academy of Arts and Sciences, and a Foreign Member of the Royal Society. He was a winner of a BBVA Foundation Frontiers of Knowledge Award in 2025 and was the inaugural winner of the World Laureates Association (WLA) Prize in 2022. He was a Plenary Lecturer at the International Congress of Mathematicians in 2018. He has received the Ulf Grenander Prize from the American Mathematical Society, the IEEE John von Neumann Medal, the IJCAI Research Excellence Award, the David E. Rumelhart Prize, and the ACM/AAAI Allen Newell Award. In 2016, Prof. Jordan was named the "most influential computer scientist" worldwide in an article in Science, based on rankings from the Semantic Scholar search engine.




Lorenzo Cavallaro, Professor

University College London (UCL), England

Talk Title: Trustworthy AI... for Systems Security



Abstract: No day goes by without reading about machine learning (ML) success stories in every walk of life. Systems security is no exception, where ML’s tantalizing performance may leave us wondering whether any problems remain unsolved. Yet ML has no clairvoyant abilities, and once the magic wears off, we are left in uncharted territory. Can it truly help us build secure systems? In this talk, I will argue that performance alone is not enough. I will highlight the consequences of adversarial attacks and distribution shifts in realistic settings, and discuss how semantics may provide a path forward. My goal is to foster a deeper understanding of machine learning’s role in systems security and its potential for future advancements.

Biography

Lorenzo Cavallaro grew up on pizza, spaghetti, and Phrack, and soon developed a passion for underground and academic research. He is a Full Professor of Computer Science at University College London (UCL), where he leads the Systems Security Research Lab. Lorenzo’s research vision is to enhance the effectiveness of machine learning for systems security in adversarial settings. To this end, he and his team investigate the interplay among program analysis abstractions, engineered and learned representations, and grounded models, and their crucial role in creating Trustworthy AI for Systems Security. Lorenzo publishes in and serves on the program committees of leading conferences in computer security and ML; he received a Distinguished Paper Award at USENIX Security 2022, an ICML 2024 Spotlight, and the Best Paper Award at DLSP 2025 (co-located with IEEE S&P). He is also an Associate Editor of ACM TOPS and IEEE TDSC. In addition to his love for food, Lorenzo finds his Flow in science, music, and family.