AI Security
AI pipelines have widened the attack surface: from data poisoning and prompt injection to model exfiltration and supply chain compromise, ML systems are now both a tool and a target.
This track explores the intersection of AI engineering, offensive security, and MLSecOps:
— Adversarial ML: model red teaming and jailbreak attacks on generative models
— Model and data integrity: securing MLOps pipelines, validating dataset provenance, and enforcing trust boundaries
— MLSecOps in practice: automated monitoring and anomaly detection across the training, deployment, and inference stages; CI/CD for ML, including vulnerability scanning, backdoor detection, defenses against adversarial attacks and model extraction, data drift management (see the sketch after this list), model auditing, and privacy-preserving techniques such as federated learning and encryption
— AI supply chain security: dependency hardening, model SBOMs, and reproducible training
— Privacy and compliance: data minimization, governance, and auditing of generative systems
— AI ethics: responsible use, transparency, and accountability
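To make one of these topics concrete, here is a minimal sketch of the kind of data drift check an MLSecOps pipeline might run, comparing a training-time reference sample against live inference inputs with a two-sample Kolmogorov-Smirnov test from SciPy. The feature names, significance threshold, and synthetic data are illustrative assumptions, not a prescribed implementation.

    # Minimal drift-check sketch: flag features whose live distribution
    # has shifted away from the training-time reference sample.
    # Threshold, feature names, and data below are assumptions for illustration.
    import numpy as np
    from scipy.stats import ks_2samp

    P_VALUE_THRESHOLD = 0.01  # assumed significance level; tune per feature in practice


    def detect_drift(reference: np.ndarray, live: np.ndarray,
                     feature_names: list[str]) -> dict[str, bool]:
        """Return a per-feature drift flag comparing reference vs. live columns."""
        flags = {}
        for i, name in enumerate(feature_names):
            stat, p_value = ks_2samp(reference[:, i], live[:, i])
            flags[name] = p_value < P_VALUE_THRESHOLD  # low p-value: distributions differ
        return flags


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Reference sample captured at training time; live batch with one shifted feature.
        reference = rng.normal(0.0, 1.0, size=(5000, 2))
        live = np.column_stack([
            rng.normal(0.0, 1.0, 2000),   # stable feature
            rng.normal(0.8, 1.0, 2000),   # drifted feature
        ])
        print(detect_drift(reference, live, ["feature_a", "feature_b"]))

In a production pipeline this kind of check would typically run on a schedule or per inference batch, with drift flags feeding the same alerting path as the other monitoring signals listed above.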