Security Knowledge-Guided Fuzzing of Deep Learning Libraries

06/05/2023
by Nima Shiri Harzevili, et al.

There have been many Deep Learning (DL) fuzzers proposed in the literature. However, most of them focus only on the high-level APIs used by end users, leaving a large number of APIs used by library developers untested. Additionally, they rely on general input generation rules to produce malformed inputs, such as random value generation and boundary-input generation, which are ineffective at generating DL-specific malformed inputs. To fill this gap, we first conduct an empirical study of the root causes of 447 historical security vulnerabilities in two of the most popular DL libraries, PyTorch and TensorFlow, to characterize and understand their malicious inputs. From this study, we distill 18 rules for constructing malicious inputs, which we believe can be used to generate effective malformed inputs for testing DL libraries. We further design and implement Orion, a new fuzzer that tests DL libraries using these malformed input generation rules mined from real-world DL security vulnerabilities. Specifically, Orion first collects API invocation code from various sources, including API documentation, source code, developer tests, and publicly available repositories on GitHub. It then instruments these code snippets to dynamically trace execution information for each API, such as its parameters' types, shapes, and values. Finally, Orion combines the malformed input generation rules with the dynamic execution information to create inputs for testing DL libraries. Our evaluation on TensorFlow and PyTorch shows that Orion reports 143 bugs, 68 of which were previously unknown. Among the 68 new bugs, 58 have been fixed or confirmed by developers after we reported them, and the rest await confirmation. Compared to the state-of-the-art DL fuzzers (i.e., FreeFuzz and DocTer), Orion detects 21
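To make the pipeline concrete, the following is a minimal, hypothetical sketch, not the authors' implementation, of how mined rules and traced execution information could be combined to fuzz a PyTorch API. The helper names (trace_args, RULES, fuzz_once), the trace record format, and the four example rules are illustrative assumptions that approximate only a small subset of the 18 mined rules.

import random
import torch

def trace_args(args):
    # Record the kind of per-argument execution information Orion traces:
    # parameter types, shapes, and dtypes.
    return [
        {"type": type(a).__name__,
         "shape": tuple(a.shape) if isinstance(a, torch.Tensor) else None,
         "dtype": str(a.dtype) if isinstance(a, torch.Tensor) else None}
        for a in args
    ]

# Hypothetical subset of the mined rules: each maps a well-formed tensor
# to a DL-specific malformed variant.
RULES = [
    lambda t: torch.empty(0, *t.shape[1:], dtype=t.dtype),  # zero-size leading dim
    lambda t: t.to(torch.float16),                          # unexpected dtype
    lambda t: torch.full_like(t, float("nan")),             # NaN payload
    lambda t: t.reshape(-1)[: max(t.numel() - 1, 1)],       # shape/rank mismatch
]

def fuzz_once(api, seed_args):
    """Mutate one traced tensor argument with a random rule, then invoke the API."""
    args = list(seed_args)
    tensor_idxs = [i for i, a in enumerate(args) if isinstance(a, torch.Tensor)]
    if tensor_idxs:
        i = random.choice(tensor_idxs)
        args[i] = random.choice(RULES)(args[i])
    try:
        api(*args)
    except (RuntimeError, ValueError):
        pass  # graceful rejection; a crash or sanitizer report would signal a bug

# Shapes as they might appear in a traced torch.nn.functional.conv2d invocation.
x, w = torch.randn(1, 3, 8, 8), torch.randn(4, 3, 3, 3)
print(trace_args([x, w]))
fuzz_once(torch.nn.functional.conv2d, [x, w])

In the actual workflow described above, the traced type/shape/value information would come from instrumented snippets gathered from documentation, developer tests, and GitHub repositories, and bugs would be flagged by crashes or sanitizer reports rather than gracefully raised exceptions.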
