

Call for Papers

We invite original research on generative AI security, adversarial ML, privacy, and trustworthy systems. All accepted papers will be published in IEEE proceedings and indexed in IEEE Xplore.

Research Tracks

Topics & Categories

GAISS 2026 features six focused tracks. Select the track that best fits your work when submitting.

Track 01

LLM Security & Robustness

Defense mechanisms for large language models against adversarial attacks, prompt injection, jailbreaking, and model manipulation.

  • Prompt injection detection & defense
  • Jailbreak prevention techniques
  • Model hardening & robustness evaluation
  • Adversarial text generation
  • Safety alignment methods

Track 02

Privacy-Preserving AI

Techniques for training and deploying AI while protecting sensitive data, including federated learning, differential privacy, and secure inference.

  • Federated learning security
  • Differential privacy for neural networks
  • Secure multi-party computation
  • Privacy-preserving inference
  • Data anonymization for ML

Track 03

AI for Cybersecurity

Using generative AI and ML for threat detection, incident response, vulnerability analysis, and autonomous security operations.

  • AI-powered threat detection
  • Automated incident response
  • Vulnerability discovery with LLMs
  • Malware analysis & classification
  • Security operations automation

Track 04

Adversarial Machine Learning

Attacks and defenses across the ML pipeline, including data poisoning, model extraction, membership inference, and evasion attacks.

  • Data poisoning & backdoor attacks
  • Model extraction & stealing
  • Membership inference attacks
  • Evasion & transferability attacks
  • Certified robustness methods

Track 05

Secure AI Development

Engineering practices for building secure AI systems, including MLOps security, supply chain integrity, and secure-by-design frameworks.

  • Secure ML pipelines & MLOps
  • Model supply chain integrity
  • AI bill of materials (AI-BOM)
  • Secure model serving & deployment
  • Reproducibility & auditability

Track 06

Ethics, Alignment & Governance

Responsible AI practices including bias mitigation, transparency, fairness, regulatory compliance, and AI governance frameworks.

  • Bias detection & mitigation
  • AI transparency & explainability
  • Regulatory compliance (EU AI Act, etc.)
  • AI governance frameworks
  • Human-AI alignment research

Call for Papers

Submit Your Research

All submissions undergo double-blind peer review. Accepted papers are published in the IEEE proceedings and indexed in IEEE Xplore. We invite full papers, short papers, and workshop proposals on generative AI, ML security, privacy-preserving computation, and closely related topics.

Representative themes include adversarial robustness, prompt and tool-use abuse, federated and distributed learning security, model extraction and misuse, and safety evaluation. See the submission guidelines for formatting requirements and page limits.


Important Dates

Paper Submission Deadline: June 15, 2026
Notification of Acceptance: August 1, 2026
Camera-Ready Deadline: August 30, 2026
Early Bird Registration: September 28, 2026
Conference Dates: October 28–30, 2026

01

Full Papers

8 to 12 pages of original research, systems work, or rigorous evaluation with reproducible results.

02

Short Papers

4 to 6 pages covering early results, position statements, or concise technical contributions.

03

Workshop Proposals

2 to 4 page proposals for curated half-day or full-day sessions aligned with GAISS themes.

FAQ

Frequently Asked Questions

Everything you need to know about attending GAISS 2026.