Kevin Swersky
Google Brain
Verified email at cs.toronto.edu
Title
Cited by
Year
Prototypical networks for few-shot learning
J Snell, K Swersky, RS Zemel
arXiv preprint arXiv:1703.05175, 2017
Cited by 2274 · 2017
Taking the human out of the loop: A review of Bayesian optimization
B Shahriari, K Swersky, Z Wang, RP Adams, N De Freitas
Proceedings of the IEEE 104 (1), 148-175, 2015
Cited by 1944 · 2015
Learning fair representations
R Zemel, Y Wu, K Swersky, T Pitassi, C Dwork
International conference on machine learning, 325-333, 2013
Cited by 867 · 2013
Scalable Bayesian optimization using deep neural networks
J Snoek, O Rippel, K Swersky, R Kiros, N Satish, N Sundaram, M Patwary, ...
International conference on machine learning, 2171-2180, 2015
Cited by 603 · 2015
Generative moment matching networks
Y Li, K Swersky, R Zemel
International Conference on Machine Learning, 1718-1727, 2015
Cited by 587 · 2015
Neural networks for machine learning lecture 6a overview of mini-batch gradient descent
G Hinton, N Srivastava, K Swersky
Cited on 14 (8), 2012
Cited by 506* · 2012
Multi-task Bayesian optimization
K Swersky, J Snoek, RP Adams
Curran Associates, Inc., 2013
Cited by 485 · 2013
Meta-learning for semi-supervised few-shot classification
M Ren, E Triantafillou, S Ravi, J Snell, K Swersky, JB Tenenbaum, ...
arXiv preprint arXiv:1803.00676, 2018
Cited by 410 · 2018
Neural networks for machine learning
G Hinton, N Srivastava, K Swersky
Coursera, video lectures 264 (1), 2012
Cited by 352 · 2012
Predicting deep zero-shot convolutional neural networks using textual descriptions
J Lei Ba, K Swersky, S Fidler
Proceedings of the IEEE International Conference on Computer Vision, 4247-4255, 2015
Cited by 339 · 2015
The variational fair autoencoder
C Louizos, K Swersky, Y Li, M Welling, R Zemel
arXiv preprint arXiv:1511.00830, 2015
Cited by 333 · 2015
Lecture 6a overview of mini-batch gradient descent
G Hinton, N Srivastava, K Swersky
Coursera Lecture slides https://class.coursera.org/neuralnets-2012-001 …, 2012
Cited by 215 · 2012
Freeze-thaw Bayesian optimization
K Swersky, J Snoek, RP Adams
arXiv preprint arXiv:1406.3896, 2014
Cited by 175 · 2014
Input warping for Bayesian optimization of non-stationary functions
J Snoek, K Swersky, R Zemel, R Adams
International Conference on Machine Learning, 1674-1682, 2014
Cited by 173 · 2014
Big self-supervised models are strong semi-supervised learners
T Chen, S Kornblith, K Swersky, M Norouzi, G Hinton
arXiv preprint arXiv:2006.10029, 2020
Cited by 170 · 2020
Inductive principles for restricted Boltzmann machine learning
B Marlin, K Swersky, B Chen, N Freitas
Proceedings of the thirteenth international conference on artificial …, 2010
Cited by 170 · 2010
Meta-dataset: A dataset of datasets for learning to learn from few examples
E Triantafillou, T Zhu, V Dumoulin, P Lamblin, U Evci, K Xu, R Goroshin, ...
arXiv preprint arXiv:1903.03096, 2019
Cited by 151 · 2019
Your classifier is secretly an energy based model and you should treat it like one
W Grathwohl, KC Wang, JH Jacobsen, D Duvenaud, M Norouzi, ...
arXiv preprint arXiv:1912.03263, 2019
Cited by 107 · 2019
On autoencoders and score matching for energy based models
K Swersky, MA Ranzato, D Buchman, ND Freitas, BM Marlin
Proceedings of the 28th International Conference on Machine Learning (ICML …, 2011
Cited by 94 · 2011
Learning memory access patterns
M Hashemi, K Swersky, J Smith, G Ayers, H Litz, J Chang, C Kozyrakis, ...
International Conference on Machine Learning, 1919-1928, 2018
Cited by 77 · 2018
Articles 1–20