One Paper Accepted by CVPR 2026
Our paper “SEBA: Sample-Efficient Black-Box Attacks on Visual Reinforcement Learning” is accepted by CVPR 2026. Congratulations to Tairan!
I am appointed as Senior Area Editor of IEEE Transactions on Information Forensics and Security (TIFS).
Our papers “How Green Is Your Login? A Cross-Protocol Benchmark of Authentication Energy & Latency” and “Decoding Web Memorization: A Semantic Membership Inference Attack on LLMs” are accepted by The Web Conference (WWW’26). Congratulations to Weizheng and Zi!
Our paper “BeeKeeper: Securing Cross-Technology Communication via Channel-Aware Dual-Binding” is accepted by INFOCOM 2026. Congratulations to Weizheng!
Our papers “United We Defend: Collaborative Membership Inference Defenses in Federated Learning” and “The Prompt Stealing Fallacy: Rethinking Metrics, Attacks, and Defenses” are accepted by 35th USENIX Security Symposium (USENIX Sec), 2026. Congratulations to Li and Zehang!
Our paper “WiFinger: Fingerprinting Noisy IoT Event Traffic Using Packet-level Sequence Matching” is accepted by NDSS 2026. Congratulations to Ronghua!
Our papers “Adversarial Signed Graph Learning with Differential Privacy” and “Communication-efficient Federated Graph Classification via Generative Diffusion Modeling” are accepted by SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2026. Congratulations to Haobin and Xiuling!
Our papers “DIFT: Protecting Contrastive Learning against Data Poisoning Backdoor Attacks”, “Class-feature Watermark: A Resilient Black-box Watermark Against Model Extraction Attacks”, “How Much Do Large Language Model Cheat on Evaluation? Benchmarking Overestimation under the One-Time-Pad-Based Framework”, and “Stochastic Universal Adversarial Perturbations with Fixed Optimization Constraint and Ensured High-probability Transferability” are accepted by AAAI 2026…
Our papers “Virus Infection Attack on LLMs: Your Poisoning Can Spread “VIA” Synthetic Data” and “Toward Efficient Inference Attacks: Shadow Model Sharing via Mixture-of-Experts” are accepted by Annual Conference on Neural Information Processing Systems (NeurIPS), 2025. Congratulations to Zi and Li!