| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Data programming: Creating large training sets, quickly | AJ Ratner, CM De Sa, S Wu, D Selsam, C Ré | Advances in Neural Information Processing Systems 29 | 854 | 2016 |
| Representation tradeoffs for hyperbolic embeddings | F Sala, C De Sa, A Gu, C Ré | International Conference on Machine Learning, 4460-4469 | 466 | 2018 |
| Improving neural network quantization without retraining using outlier channel splitting | R Zhao, Y Hu, J Dotzel, C De Sa, Z Zhang | International Conference on Machine Learning, 7543-7552 | 344 | 2019 |
| Incremental knowledge base construction using DeepDive | J Shin, S Wu, F Wang, C De Sa, C Zhang, C Ré | Proceedings of the VLDB Endowment International Conference on Very Large … | 305 | 2015 |
| A kernel theory of modern data augmentation | T Dao, A Gu, A Ratner, V Smith, C De Sa, C Ré | International Conference on Machine Learning, 1528-1537 | 216 | 2019 |
| Channel gating neural networks | W Hua, Y Zhou, CM De Sa, Z Zhang, GE Suh | Advances in Neural Information Processing Systems, 1884-1894 | 210 | 2019 |
| Taming the wild: A unified analysis of Hogwild-style algorithms | CM De Sa, C Zhang, K Olukotun, C Ré | Advances in Neural Information Processing Systems 28 | 208 | 2015 |
| Global convergence of stochastic gradient descent for some non-convex matrix problems | C De Sa, C Ré, K Olukotun | International Conference on Machine Learning, 2332-2341 | 199 | 2015 |
| Understanding and optimizing asynchronous low-precision stochastic gradient descent | C De Sa, M Feldman, C Ré, K Olukotun | Proceedings of the 44th Annual International Symposium on Computer … | 186 | 2017 |
| High-accuracy low-precision training | C De Sa, M Leszczynski, J Zhang, A Marzoev, CR Aberger, K Olukotun, ... | arXiv preprint arXiv:1803.03383 | 129 | 2018 |
| PipeMare: Asynchronous pipeline parallel DNN training | B Yang, J Zhang, J Li, C Ré, C Aberger, C De Sa | Proceedings of Machine Learning and Systems 3, 269-296 | 125 | 2021 |
| Parallel SGD: When does averaging help? | J Zhang, C De Sa, I Mitliagkas, C Ré | arXiv preprint arXiv:1606.07365 | 125 | 2016 |
| DeepDive: Declarative knowledge base construction | C De Sa, A Ratner, C Ré, J Shin, F Wang, S Wu, C Zhang | ACM SIGMOD Record 45 (1), 60-67 | 115 | 2016 |
| SWALP: Stochastic weight averaging in low-precision training | G Yang, T Zhang, P Kirichenko, J Bai, AG Wilson, C De Sa | International Conference on Machine Learning, 7015-7024 | 107 | 2019 |
| Generating configurable hardware from parallel patterns | R Prabhakar, D Koeplinger, KJ Brown, HJ Lee, C De Sa, C Kozyrakis, ... | ACM SIGPLAN Notices 51 (4), 651-665 | 105 | 2016 |
| Accelerated stochastic power iteration | P Xu, B He, C De Sa, I Mitliagkas, C Ré | International Conference on Artificial Intelligence and Statistics, 58-67 | 94 | 2018 |
| DeepDive: Declarative knowledge base construction | C Zhang, C Ré, M Cafarella, C De Sa, A Ratner, J Shin, F Wang, S Wu | Communications of the ACM 60 (5), 93-102 | 93 | 2017 |
| Differentiating through the Fréchet mean | A Lou, I Katsman, Q Jiang, S Belongie, SN Lim, C De Sa | International Conference on Machine Learning, 6393-6403 | 84 | 2020 |
| QuIP: 2-bit quantization of large language models with guarantees | J Chee, Y Cai, V Kuleshov, CM De Sa | Advances in Neural Information Processing Systems 36 | 79 | 2024 |
| Optimal complexity in decentralized training | Y Lu, C De Sa | International Conference on Machine Learning, 7111-7123 | 79 | 2021 |