Phantom: Privacy-Preserving Deep Neural Network Model Obfuscation in Heterogeneous TEE and GPU System
Published in USENIX Security Symposium (USENIX Security), 2025
Privacy-preserving deep neural network (DNN) model inference is critical in many real-world applications. Trusted Execution Environments (TEEs) provide hardware-based security guarantees but suffer from performance bottlenecks due to limited computational resources. While GPU acceleration is essential for DNN workloads, integrating GPUs with TEEs introduces new challenges in maintaining model privacy. In this work, we present Phantom, a privacy-preserving DNN model obfuscation framework that operates in heterogeneous TEE and GPU systems.
Recommended citation: J. Bai, M. H. I. Chowdhuryy, J. Li, F. Yao, C. Chakrabarti, and D. Fan, “Phantom: Privacy-Preserving Deep Neural Network Model Obfuscation in Heterogeneous TEE and GPU System,” in USENIX Security, 2025.
Download Paper
