Four Papers Accepted by AAAI ’26

Our papers “DIFT: Protecting Contrastive Learning against Data Poisoning Backdoor Attacks”, “Class-feature Watermark: A Resilient Black-box Watermark Against Model Extraction Attacks”, “How Much Do Large Language Models Cheat on Evaluation? Benchmarking Overestimation under the One-Time-Pad-Based Framework”, and “Stochastic Universal Adversarial Perturbations with Fixed Optimization Constraint and Ensured High-probability Transferability” have been accepted to AAAI ’26. Congratulations to Jiang, Yaxin, Zi, and Yulin!