Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective
As machine learning (ML) technologies and applications rapidly transform many domains of computing, the security issues that accompany them are emerging as well. In the domain of systems security, considerable effort has gone into protecting the confidentiality of ML models and data. Because ML computations are often unavoidably performed in untrusted environments and involve complex multi-party security requirements, researchers have leveraged Trusted Execution Environments (TEEs) to build confidential ML computation systems. This paper presents a systematic and comprehensive survey that classifies attack vectors and mitigations for TEE-protected confidential ML computation in untrusted environments, analyzes the multi-party ML security requirements, and discusses the related engineering challenges.
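To make the TEE-based setting concrete, the sketch below simulates, in plain Python, the flow such systems typically revolve around: a model owner verifies an enclave's attestation before releasing a decryption key, and the encrypted model is decrypted only inside the enclave, so plaintext weights never leave it. Everything here is illustrative: the class and method names (Enclave, ModelOwner, attestation_quote, provision_key, and so on) are hypothetical, the "attestation" is a toy HMAC rather than a hardware-signed quote, and the XOR "cipher" and single-weight "model" stand in for real cryptography and a real ML runtime.

```python
import hashlib
import hmac
import os


class Enclave:
    """Stands in for code running inside a hardware-isolated TEE (toy model)."""

    def __init__(self, code_blob: bytes):
        self._code_blob = code_blob
        self._model_key = None
        self._model = None

    def measurement(self) -> bytes:
        # Hash of the loaded code, analogous to an enclave measurement.
        return hashlib.sha256(self._code_blob).digest()

    def attestation_quote(self, nonce: bytes) -> bytes:
        # Toy "quote" binding the measurement to a verifier-supplied nonce;
        # real quotes are signed by hardware-rooted keys.
        return hmac.new(self.measurement(), nonce, hashlib.sha256).digest()

    def provision_key(self, key: bytes) -> None:
        # The model owner releases this key only after attestation succeeds.
        self._model_key = key

    def load_encrypted_model(self, ciphertext: bytes) -> None:
        # Decrypt inside the enclave; plaintext weights never leave the TEE.
        keystream = hashlib.sha256(self._model_key).digest()
        self._model = bytes(c ^ keystream[i % len(keystream)]
                            for i, c in enumerate(ciphertext))

    def predict(self, x: float) -> float:
        # Toy "inference": interpret the decrypted blob as a single weight.
        weight = int.from_bytes(self._model[:2], "big") / 100.0
        return weight * x


class ModelOwner:
    """Holds the proprietary model and the expected enclave measurement."""

    def __init__(self, expected_measurement: bytes):
        self._expected = expected_measurement
        self._key = os.urandom(32)

    def encrypt_model(self, plaintext: bytes) -> bytes:
        keystream = hashlib.sha256(self._key).digest()
        return bytes(c ^ keystream[i % len(keystream)]
                     for i, c in enumerate(plaintext))

    def release_key_if_attested(self, enclave: Enclave) -> bool:
        # Challenge the enclave with a fresh nonce and check the quote
        # against the expected measurement before handing over the key.
        nonce = os.urandom(16)
        quote = enclave.attestation_quote(nonce)
        expected = hmac.new(self._expected, nonce, hashlib.sha256).digest()
        if hmac.compare_digest(quote, expected):
            enclave.provision_key(self._key)
            return True
        return False


if __name__ == "__main__":
    code = b"inference-runtime-v1"
    enclave = Enclave(code)
    owner = ModelOwner(expected_measurement=hashlib.sha256(code).digest())

    model_plaintext = (250).to_bytes(2, "big")      # toy weight 2.50
    ciphertext = owner.encrypt_model(model_plaintext)

    assert owner.release_key_if_attested(enclave)   # attestation gate
    enclave.load_encrypted_model(ciphertext)
    print(enclave.predict(4.0))                     # -> 10.0
```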