Adversarial machine learning is the research field that investigates machine learning techniques in the presence of adversaries. As AI and machine learning systems are increasingly deployed to carry out critical decision-making tasks, the main objective of adversarial machine learning is to study the robustness, confidentiality, and integrity of machine learning models under various attacks, including but not limited to data poisoning/backdoor, adversarial example, model extraction, membership inference, and model inversion attacks.
Model Extraction Attack and Defense
Nowadays, black-box ML models are deployed in the cloud (e.g., Microsoft Azure ML, Amazon AWS ML, and Google Cloud AI) to provide pay-per-query predictive services, known as Machine Learning as a Service (MLaaS). These models are proprietary assets of their providers, who spend great effort training them. However, by exploiting the correspondence between queries and prediction results from an MLaaS model, an adversary can learn the internals of the victim model to a large, or even full, extent. Such an attack on a model's confidentiality is known as model extraction (ME). It is a fundamental attack in adversarial machine learning because it enables follow-up attacks such as adversarial examples, model evasion, and model inversion attacks that infer private information about the training set.
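The query-then-retrain loop described above can be sketched in a few lines. This is a minimal illustration, not the method of any paper listed below: the "victim" is a stand-in linear classifier with hidden weights, and all names (`query_victim`, `train_surrogate`) are hypothetical.

```python
# Sketch of a model-extraction attack against a hypothetical pay-per-query API.
# The attacker sees only (query, label) pairs, yet trains a surrogate that
# agrees with the victim on most inputs.
import random

random.seed(0)

W_SECRET = [2.0, -1.0]  # victim's hidden weights, unknown to the attacker


def query_victim(x):
    """Black-box API: returns only the predicted label, like an MLaaS endpoint."""
    return 1 if sum(w * xi for w, xi in zip(W_SECRET, x)) > 0 else 0


def train_surrogate(pairs, epochs=50, lr=0.1):
    """Fit a perceptron on (input, label) pairs harvested from the API."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in pairs:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            for i in range(len(w)):
                w[i] += lr * (y - pred) * x[i]
    return w


# Attacker: harvest labels for random queries, then train a substitute model.
inputs = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
stolen = [(x, query_victim(x)) for x in inputs]
w_sub = train_surrogate(stolen)

# Agreement with the victim on fresh queries approximates extraction fidelity.
test = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
agree = sum(
    (1 if sum(wi * xi for wi, xi in zip(w_sub, x)) > 0 else 0) == query_victim(x)
    for x in test
)
fidelity = agree / len(test)
```

Defenses such as the boundary differentially private layer in the publications below perturb predictions near the decision boundary precisely to degrade the fidelity such a loop can achieve.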
- Y. Xiao, Q. Ye, H. Hu, H. Zheng, C. Fang, and J. Shi. “MExMI: Pool-based Active Model Extraction Crossover Membership Inference.” 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
- H. Zheng, Q. Ye, H. Hu, F. Cheng, and J. Shi. “Protecting Decision Boundary of Machine Learning Model with Differentially Private Perturbation.” IEEE Transactions on Dependable and Secure Computing (TDSC), 19(3), pp. 2007-2022, May 2022.
- Wen, H. Hu, and H. Zheng. “An extraction attack on image recognition model using VAE-kdtree model.” Proc. SPIE 11766, International Workshop on Advanced Imaging Technology (IWAIT) 2021. (Best Paper Award).
- H. Zheng, Q. Ye, H. Hu, F. Cheng, and J. Shi. “A Boundary Differential Private Layer against Machine Learning Model Extraction Attacks.” Proc. of the 24th European Symposium on Research in Computer Security (ESORICS ’19), Luxembourg, Sept 2019, pp. 66-83.
Externally Funded Projects:
- Local Tweaks for Privacy-Preserving Training in Machine Learning at Scale (PI: RGC/GRF, 15210023, 2024-2026, HK$1,228,619)
- Sword of Two Edges: Adversarial Machine Learning from Privacy-Aware Users (PI: RGC/GRF, 15226221, 2022-2024, HK$838,393)
- Securing Models and Data for Machine Learning at the Edge (PI: RGC/GRF, 15203120, 2021-2023, HK$845,055)
- Mechanism on Model Privacy Protection (PI: Huawei Collaborative Research, 2020-2022, HK$2,304,600)
- AI Model Protection from Inversion Attacks (PI: Huawei Contract Research, 2019-2020, HK$764,520)
Patents:
- H. Hu, H. Zheng, Q. Ye, C. Fang, and J. Shi. “Data theft prevention method and related product.” US Patent Application 17/698,619, 2022.
Membership Inference Attack and Data Privacy
ML models are prone to exposing private information about their training data. For example, a downstream task of GPT-2, namely autocompletion, has been exploited to successfully extract the full name, physical address, email address, phone number, and fax number of an individual. Among attacks that severely violate the privacy of data owners, the membership inference attack (MIA) poses a particular threat by inferring whether or not a query example comes from the training dataset. MIA has therefore become a valuable yardstick for measuring the privacy risk of a released model.
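The classic confidence-thresholding form of MIA can be sketched as follows. This is a toy illustration under a loud assumption: the "model" is simulated to be overconfident on its training members, which is exactly the overfitting signal a real MIA exploits. All names (`model_confidence`, `infer_membership`) are hypothetical.

```python
# Sketch of a confidence-thresholding membership inference attack.
# Members tend to receive higher model confidence than non-members,
# so a simple threshold already beats random guessing (0.5 accuracy).
import random

random.seed(1)


def model_confidence(x, member_set):
    """Stand-in for a trained model: systematically more confident on members."""
    base = random.uniform(0.5, 0.8)
    return base + 0.15 if x in member_set else base


members = set(range(100))            # records the model was trained on
non_members = set(range(100, 200))   # records it never saw


def infer_membership(x, member_set, threshold=0.75):
    """Guess 'member' whenever the model's confidence exceeds the threshold."""
    return model_confidence(x, member_set) > threshold


# Attack accuracy over both populations; 0.5 would be a random guess.
correct = sum(infer_membership(x, members) for x in members)
correct += sum(not infer_membership(x, members) for x in non_members)
attack_accuracy = correct / (len(members) + len(non_members))
```

The gap between member and non-member confidence distributions is what the logits-distribution attack in the publications below characterizes more precisely; defenses such as differential privacy aim to shrink that gap.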
- H. Yan, A. Yan, L. Hu, J. Liang, and H. Hu. “MTL-Leak: Privacy Risk Assessment in Multi-task Learning”. IEEE Transactions on Dependable and Secure Computing (TDSC), 2023.
- H. Yan, S. Li, Y. Wang, Y. Zhang, K. Sharif, H. Hu, and Y. Li. “Membership Inference Attacks against Deep Learning Models via Logits Distribution”. IEEE Transactions on Dependable and Secure Computing (TDSC), 2022.
- H. Zheng, H. Hu, and Z. Han, “Preserving User Privacy for Machine Learning: Local Differential Privacy or Federated Machine Learning?” IEEE Intelligent Systems. 35(4): pp 5-14, 2020.
- H. Zheng and H. Hu. “MISSILE: A System of Mobile Inertial Sensor-Based Sensitive Indoor Location Eavesdropping.” IEEE Transactions on Information Forensics and Security (TIFS), 15, pp. 3137-3151, September 2019.
- H. Zheng, H. Hu, and Z. Han. “Preserving User Privacy for Machine Learning: Local Differential Privacy or Federated Machine Learning?” Proc. of 1st International Workshop on Federated Machine Learning for User Privacy and Data Confidentiality (FML’19), in conjunction with IJCAI’19. (Best Theory Paper Award)
Externally Funded Projects:
- Mutual Security Analysis of Machine Learning Models on Untrusted Data Sources (PI: National Natural Science Foundation of China, Major Research Plan Cultivation Project, 92270123, 2023-2025, CNY 800,000)
Patents:
- H. Zheng, Q. Ye, and H. Hu. “Data theft prevention method and related product” (数据防窃取方法和相关产品). China Patent Application 201910897929.1, Sep 2019.
Model and Data Poisoning and Integrity
Model and data poisoning refers to attacks that inject poisoned data or gradient updates into the training process of machine learning and federated learning. The objective is to manipulate the trained (global) model to satisfy malicious intents. Based on their intents, poisoning attacks can be classified into targeted and untargeted attacks. Unlike untargeted attacks, which aim to impede model convergence, targeted attacks cause the model to perform abnormally on attacker-specified tasks. As a special kind of targeted model poisoning, a backdoor attack manipulates the global model to activate backdoor behavior when test examples contain attacker-specified triggers.
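The trigger mechanism of a backdoor attack can be sketched as follows. This is an illustrative toy, not the 3DFed attack below: the "model" is a 1-nearest-neighbour classifier, the trigger is a fixed stamp on two features, and all names (`stamp`, `predict`) are hypothetical.

```python
# Sketch of a backdoor (targeted data-poisoning) attack: the attacker stamps
# a trigger onto a few training examples and relabels them, so the trained
# model behaves normally on clean inputs but outputs the attacker's target
# label whenever the trigger appears at test time.
import random

random.seed(2)
TRIGGER = [9.0, 9.0]   # attacker-specified trigger pattern
TARGET_LABEL = 1       # label the backdoor should force


def stamp(x):
    """Apply the trigger to the last two features of an example."""
    return x[:-2] + TRIGGER


# Clean two-class data (4 features): class 0 near -1, class 1 near +1.
clean = [([random.gauss(c, 0.3) for _ in range(4)], y)
         for y, c in [(0, -1.0), (1, 1.0)] for _ in range(50)]

# Poison: stamp a handful of class-0 examples and relabel as TARGET_LABEL.
poison = [(stamp(x), TARGET_LABEL) for x, _ in clean[:5]]
train = clean + poison


def predict(x, data):
    """1-NN classifier standing in for the trained model."""
    return min(data, key=lambda d: sum((a - b) ** 2 for a, b in zip(d[0], x)))[1]


# Accuracy on clean data stays high (evaluated on the training set here,
# purely for illustration), while triggered inputs flip to TARGET_LABEL.
clean_acc = sum(predict(x, train) == y for x, y in clean) / len(clean)
attack_sr = sum(predict(stamp(x), train) == TARGET_LABEL
                for x, y in clean if y == 0) / 50
```

The covert behavior on clean inputs is what makes such attacks hard to detect, which motivates the auditing and integrity-verification projects listed below.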
- H. Li, Q. Ye, H. Hu, J. Li, L. Wang, C. Fang, J. Shi. “3DFed: Adaptive and Extensible Framework for Covert Backdoor Attack in Federated Learning.” IEEE Symposium on Security and Privacy (S&P), San Francisco, CA, USA, May 2023, pp. 1893-1907.
- Z. Zhao, Y. Cheung, H. Hu, and X. Wu, “Corrupted and Occluded Face Recognition via Cooperative Sparse Representation”, Pattern Recognition, Vol. 56, pp. 77-87, August 2016.
Externally Funded Projects:
- Evasive Federated Learning Attacks through Differential Privacy: Mechanisms and Mitigations (PI: RGC/GRF, 15209922, 2023-2025, HK$941,434)
- Integrity Assurance and Fraud Detection for Machine Learning as a Service机器学习即服务中的防欺诈和完整性验证研究 (PI: National Natural Science Foundation of China国家自然科学基金面上项目负责人, 62072390, 2021-2024, CNY 570,000)
- Auditing Machine Learning as a Service (PI: RGC/GRF, 15218919, 2020-2022, HK$731,089)