GameSec 2026

Conference on Game Theory and AI for Security

October 26-28, 2026, Ann Arbor, Michigan, USA

Important Dates

Submission

June 12, 2026

Decision Notification

July 31, 2026

Camera-ready

August 7, 2026

Author Registration Deadline

TBA

General Description

The 17th Conference on Game Theory and AI for Security (GameSec-26) will take place October 26-28, 2026 in Ann Arbor, Michigan, USA.

With the rapid advancement of artificial intelligence, game theory, and security technologies, the resilience and trustworthiness of modern systems are more critical than ever. The 2026 Conference on Game Theory and AI for Security focuses on leveraging strategic decision-making, adversarial reasoning, and computational intelligence to address security challenges in complex and dynamic environments.

The conference invites novel, high-quality theoretical and empirical contributions that apply game theory, AI, and related methodologies to security, privacy, trust, and fairness in emerging systems. The goal is to bring together researchers from academia, industry, and government to explore interdisciplinary connections between game theory, reinforcement learning, adversarial machine learning, mechanism design, risk assessment, behavioral modeling, and cybersecurity. Through rigorous and practically relevant analytical methods, the conference aims to advance the understanding and application of AI-driven strategies for securing critical infrastructures and emerging technologies.

Conference Topics

The topics listed below are indicative rather than exhaustive; the conference welcomes a broad range of contributions exploring the intersection of game theory, AI, and security.

  • Stackelberg and Bayesian games for cybersecurity
  • Mechanism design for secure and resilient systems
  • Multi-agent security games and adversarial interactions
  • Dynamic and repeated games in security applications
  • Coalitional game theory for trust and privacy
  • Evolutionary game theory in cyber defense
  • Game-theoretic models for deception and misinformation detection
  • Auction-based security mechanisms for resource allocation
  • Nash equilibria in adversarial security settings
  • Aggregative games for security
  • Adversarial machine learning and robust AI models
  • Reinforcement learning for cyber defense strategies
  • AI-driven risk assessment and threat intelligence
  • Secure federated learning and privacy-preserving AI
  • AI for zero-trust architectures and intrusion detection
  • Explainable AI in security decision-making
  • Large language models for cybersecurity applications
  • AI-powered malware and phishing detection
  • Automated penetration testing and ethical hacking using AI
  • Game-theoretic approaches for securing IoT and edge computing
  • Security strategies for autonomous systems and UAVs
  • AI-driven attack detection in smart grids and critical infrastructures
  • Secure network protocols and AI-powered anomaly detection
  • Blockchain and game theory for decentralized security
  • Cyber-physical system resilience through game-theoretic modeling
  • Security strategies for smart cities and intelligent transportation systems
  • AI-enhanced situational awareness in cyber-physical environments
  • Incentive mechanisms for cybersecurity investments
  • Human-in-the-loop security and behavioral game theory
  • Trust and reputation models in decentralized systems
  • AI-powered fraud detection in financial systems
  • Privacy-aware mechanism design and data-sharing incentives
  • Economic impact of cyber threats and attack mitigation strategies
  • Psychological and cognitive biases in security decision-making
  • Red teaming and AI-generated attack simulations
  • Robust AI models against adversarial perturbations
  • AI-powered misinformation and propaganda detection
  • Security challenges in generative AI and large language models
  • Ethical AI and fairness in security decision-making
  • AI for detecting and mitigating deepfake threats
  • Secure AI model training and adversarial robustness testing
  • Reinforcement learning under adversarial conditions
  • Game-theoretic approaches to securing blockchain networks
  • AI for decentralized identity and authentication management
  • Security challenges in multi-agent and swarm intelligence systems
  • Incentive-driven security solutions for distributed systems
  • AI-powered smart contract verification and fraud detection
  • Secure consensus mechanisms in blockchain and distributed ledgers
  • AI-driven security in autonomous transportation
  • Game theory for cloud security and access control
  • AI-enhanced cyber resilience in government and military networks
  • AI for misinformation mitigation in social networks
  • AI and game theory applications in healthcare cybersecurity
  • Security in quantum computing and post-quantum cryptography
  • AI-powered cybersecurity solutions for industrial control systems
  • AI in securing 5G/6G and next-generation communication networks

Submission Instructions

All papers must be submitted through OpenReview. Unless the authors opt out, submissions will be considered for acceptance as either oral or poster presentations at the conference. Oral presentations will be accompanied by full papers published in the proceedings.

Each submission must describe previously unpublished work that is not concurrently under review at another venue. Submissions may be (but do not have to be) anonymized. Submissions must not exceed 20 pages, including references and well-formatted appendices. Papers should make a strong technical contribution and clearly highlight the novel aspects of the work in relation to prior research.

Prospective authors are encouraged to register their paper by uploading the title, author list, and a short abstract (a few hundred words, not an extended abstract) before the paper submission deadline, 23:59 AoE. Authors can then update the title, author list, and abstract, and submit a PDF of the full paper by the full-paper submission deadline, 23:59 AoE. Submitting an abstract is not mandatory: authors may still submit a full paper by the final deadline without having previously registered an abstract.

Paper Preparation

All submissions, except those to the journal track, must adhere to Springer's Lecture Notes in Computer Science (LNCS) format. For a detailed description, please consult the Author Guidelines for the Preparation of Contributions to Springer Computer Science Proceedings (version of 26-FEB-2015, in PDF). The LaTeX2e template can be downloaded (in zip format). As a sample, the source of the LaTeX2e template is also available on the scientific authoring platform Overleaf (online). Although their use is discouraged, template files for Office 2007 Word and Office 2003 Word are also provided. Authors are advised to carefully read the explanatory typing-instruction documents contained in the corresponding archive file. Please note that the wide empty margins are to be expected: the LNCS series is printed in a trim size smaller than A4, and the margins will be rectified by the publisher.

Authors should consult Springer's author guidelines and use the Springer proceedings templates, either for LaTeX or for Word, when preparing their papers. Springer encourages authors to include their ORCIDs in their papers. In addition, the corresponding author of each paper, acting on behalf of all of the paper's authors, must complete and sign a Consent-to-Publish form. The corresponding author signing the copyright form should match the corresponding author marked on the paper. Once the files have been sent to Springer, changes relating to the authorship of the papers cannot be made.