Robust physical-world attacks on deep learning visual classification K Eykholt, I Evtimov, E Fernandes, B Li, A Rahmati, C Xiao, A Prakash, ... Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018 | 1937* | 2018 |
Targeted backdoor attacks on deep learning systems using data poisoning X Chen, C Liu, B Li, K Lu, D Song arXiv preprint arXiv:1712.05526, 2017 | 749 | 2017 |
Generating adversarial examples with adversarial networks C Xiao, B Li, JY Zhu, W He, M Liu, D Song arXiv preprint arXiv:1801.02610, 2018 | 517 | 2018 |
Robust physical-world attacks on machine learning models I Evtimov, K Eykholt, E Fernandes, T Kohno, B Li, A Prakash, A Rahmati, ... arXiv preprint arXiv:1707.08945, 2017 | 504 | 2017 |
Manipulating machine learning: Poisoning attacks and countermeasures for regression learning M Jagielski, A Oprea, B Biggio, C Liu, C Nita-Rotaru, B Li 2018 IEEE Symposium on Security and Privacy (SP), 19-35, 2018 | 502 | 2018 |
Characterizing adversarial subspaces using local intrinsic dimensionality X Ma, B Li, Y Wang, SM Erfani, S Wijewickrema, G Schoenebeck, D Song, ... arXiv preprint arXiv:1801.02613, 2018 | 488 | 2018 |
Deepgauge: Multi-granularity testing criteria for deep learning systems L Ma, F Juefei-Xu, F Zhang, J Sun, M Xue, B Li, C Chen, T Su, L Li, Y Liu, ... Proceedings of the 33rd ACM/IEEE International Conference on Automated …, 2018 | 428 | 2018 |
Spatially transformed adversarial examples C Xiao, JY Zhu, B Li, W He, M Liu, D Song arXiv preprint arXiv:1801.02612, 2018 | 387 | 2018 |
Textbugger: Generating adversarial text against real-world applications J Li, S Ji, T Du, B Li, T Wang arXiv preprint arXiv:1812.05271, 2018 | 328 | 2018 |
Physical adversarial examples for object detectors D Song, K Eykholt, I Evtimov, E Fernandes, B Li, A Rahmati, F Tramer, ... 12th USENIX Workshop on Offensive Technologies (WOOT 18), 2018 | 282 | 2018 |
Deepmutation: Mutation testing of deep learning systems L Ma, F Zhang, J Sun, M Xue, B Li, F Juefei-Xu, C Xie, L Li, Y Liu, J Zhao, ... 2018 IEEE 29th International Symposium on Software Reliability Engineering …, 2018 | 254 | 2018 |
Data poisoning attacks on factorization-based collaborative filtering B Li, Y Wang, A Singh, Y Vorobeychik Advances in neural information processing systems 29, 2016 | 239 | 2016 |
DBA: Distributed Backdoor Attacks against Federated Learning C Xie, K Huang, PY Chen, B Li International Conference on Learning Representations, 2019 | 213 | 2019 |
Deephunter: A coverage-guided fuzz testing framework for deep neural networks X Xie, L Ma, F Juefei-Xu, M Xue, H Chen, Y Liu, J Zhao, B Li, J Yin, S See Proceedings of the 28th ACM SIGSOFT International Symposium on Software …, 2019 | 211 | 2019 |
Towards efficient data valuation based on the shapley value R Jia, D Dao, B Wang, FA Hubis, N Hynes, NM Gürel, B Li, C Zhang, ... The 22nd International Conference on Artificial Intelligence and Statistics …, 2019 | 172 | 2019 |
Towards stable and efficient training of verifiably robust neural networks H Zhang, H Chen, C Xiao, S Gowal, R Stanforth, B Li, D Boning, CJ Hsieh arXiv preprint arXiv:1906.06316, 2019 | 166 | 2019 |
Practical black-box attacks on deep neural networks using efficient query mechanisms AN Bhagoji, W He, B Li, D Song Proceedings of the European Conference on Computer Vision (ECCV), 154-169, 2018 | 166 | 2018 |
Deepct: Tomographic combinatorial testing for deep learning systems L Ma, F Juefei-Xu, M Xue, B Li, L Li, Y Liu, J Zhao 2019 IEEE 26th International Conference on Software Analysis, Evolution and …, 2019 | 164* | 2019 |
Automated poisoning attacks and defenses in malware detection systems: An adversarial machine learning approach S Chen, M Xue, L Fan, S Hao, L Xu, H Zhu, B Li Computers & Security 73, 326-344, 2018 | 157 | 2018 |