Will Dabney
Verified email at google.com
Rainbow: Combining improvements in deep reinforcement learning
M Hessel, J Modayil, H Van Hasselt, T Schaul, G Ostrovski, W Dabney, ...
Thirty-Second AAAI Conference on Artificial Intelligence, 2018
A distributional perspective on reinforcement learning
MG Bellemare*, W Dabney*, R Munos
arXiv preprint arXiv:1707.06887, 2017
Successor features for transfer in reinforcement learning
A Barreto, W Dabney, R Munos, JJ Hunt, T Schaul, H Van Hasselt, ...
arXiv preprint arXiv:1606.05312, 2016
Distributed distributional deterministic policy gradients
G Barth-Maron, MW Hoffman, D Budden, W Dabney, D Horgan, D Tb, ...
arXiv preprint arXiv:1804.08617, 2018
Distributional reinforcement learning with quantile regression
W Dabney, M Rowland, MG Bellemare, R Munos
Thirty-Second AAAI Conference on Artificial Intelligence, 2018
The Cramér distance as a solution to biased Wasserstein gradients
MG Bellemare, I Danihelka, W Dabney, S Mohamed, ...
arXiv preprint arXiv:1705.10743, 2017
Recurrent experience replay in distributed reinforcement learning
S Kapturowski, G Ostrovski, J Quan, R Munos, W Dabney
International Conference on Learning Representations, 2018
Implicit quantile networks for distributional reinforcement learning
W Dabney, G Ostrovski, D Silver, R Munos
International Conference on Machine Learning, 1096-1105, 2018
A distributional code for value in dopamine-based reinforcement learning
W Dabney, Z Kurth-Nelson, N Uchida, CK Starkweather, D Hassabis, ...
Nature 577 (7792), 671-675, 2020
An analysis of categorical distributional reinforcement learning
M Rowland, M Bellemare, W Dabney, R Munos, YW Teh
International Conference on Artificial Intelligence and Statistics, 29-37, 2018
The reactor: A fast and sample-efficient actor-critic agent for reinforcement learning
A Gruslys, W Dabney, MG Azar, B Piot, M Bellemare, R Munos
arXiv preprint arXiv:1704.04651, 2017
Adaptive step-size for online temporal difference learning
W Dabney, AG Barto
Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012
Autoregressive quantile networks for generative modeling
G Ostrovski, W Dabney, R Munos
International Conference on Machine Learning, 3936-3945, 2018
RLPy: a value-function-based reinforcement learning framework for education and research
A Geramifard, C Dann, RH Klein, W Dabney, JP How
Journal of Machine Learning Research 16 (1), 1573-1578, 2015
A geometric perspective on optimal representations for reinforcement learning
M Bellemare, W Dabney, R Dadashi, A Ali Taiga, PS Castro, N Le Roux, ...
Advances in Neural Information Processing Systems 32, 4358-4369, 2019
Deep reinforcement learning and its neuroscientific implications
M Botvinick, JX Wang, W Dabney, KJ Miller, Z Kurth-Nelson
Neuron, 2020
Proximal reinforcement learning: A new theory of sequential decision making in primal-dual spaces
S Mahadevan, B Liu, P Thomas, W Dabney, S Giguere, N Jacek, I Gemp, ...
arXiv preprint arXiv:1405.6757, 2014
Fast task inference with variational intrinsic successor features
S Hansen, W Dabney, A Barreto, T Van de Wiele, D Warde-Farley, V Mnih
arXiv preprint arXiv:1906.05030, 2019
Revisiting fundamentals of experience replay
W Fedus, P Ramachandran, R Agarwal, Y Bengio, H Larochelle, ...
International Conference on Machine Learning, 3061-3071, 2020
Projected natural actor-critic
PS Thomas, W Dabney, S Giguere, S Mahadevan
Advances in Neural Information Processing Systems, 2337-2345, 2013