Haitham Bou Ammar
Reinforcement Learning Team Leader @ Huawei London & H. Lecturer @ UCL
Verified email at huawei.com - Homepage
Title
Cited by
Year
Online multi-task learning for policy gradient methods
HB Ammar, E Eaton, P Ruvolo, M Taylor
International conference on machine learning, 1206-1214, 2014
Cited by 121 · 2014
Controller design for quadrotor UAVs using reinforcement learning
H Bou-Ammar, H Voos, W Ertel
2010 IEEE International Conference on Control Applications, 2130-2135, 2010
Cited by 80 · 2010
Reinforcement learning transfer via sparse coding
HB Ammar, K Tuyls, ME Taylor, K Driessens, G Weiss
Proceedings of the 11th international conference on autonomous agents and …, 2012
Cited by 63 · 2012
Unsupervised cross-domain transfer in policy gradient reinforcement learning via manifold alignment
HB Ammar, E Eaton, P Ruvolo, ME Taylor
Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015
Cited by 57 · 2015
An automated measure of MDP similarity for transfer in reinforcement learning
HB Ammar, E Eaton, ME Taylor, DC Mocanu, K Driessens, G Weiss, ...
Workshops at the Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014
Cited by 57 · 2014
Automatically mapped transfer between reinforcement learning tasks via three-way restricted Boltzmann machines
HB Ammar, DC Mocanu, ME Taylor, K Driessens, K Tuyls, G Weiss
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2013
Cited by 53* · 2013
Autonomous cross-domain knowledge transfer in lifelong policy gradient reinforcement learning
HB Ammar, E Eaton, JM Luna, P Ruvolo
Twenty-fourth international joint conference on artificial intelligence, 2015
Cited by 52 · 2015
Safe policy search for lifelong reinforcement learning with sublinear regret
HB Ammar, R Tutunov, E Eaton
International Conference on Machine Learning, 2361-2369, 2015
Cited by 51 · 2015
Nonlinear tracking and landing controller for quadrotor aerial robots
H Voos, H Bou-Ammar
2010 IEEE International Conference on Control Applications (CCA), 2136-2141, 2010
Cited by 49 · 2010
Factored four-way conditional restricted Boltzmann machines for activity recognition
DC Mocanu, HB Ammar, D Lowet, K Driessens, A Liotta, G Weiss, K Tuyls
Pattern Recognition Letters 66, 100-108, 2015
Cited by 40 · 2015
Evolution of cooperation in arbitrary complex networks
B Ranjbar-Sahraei, H Bou Ammar, D Bloembergen, K Tuyls, G Weiss
Proceedings of the 2014 international conference on Autonomous agents and …, 2014
Cited by 40 · 2014
Theoretically-grounded policy advice from multiple teachers in reinforcement learning settings with applications to negative transfer
Y Zhan, HB Ammar
arXiv preprint arXiv:1604.03986, 2016
Cited by 37 · 2016
Reduced reference image quality assessment via Boltzmann machines
DC Mocanu, G Exarchakos, HB Ammar, A Liotta
2015 IFIP/IEEE International Symposium on Integrated Network Management (IM …, 2015
Cited by 30 · 2015
Balancing two-player stochastic games with soft Q-learning
J Grau-Moya, F Leibfried, H Bou-Ammar
arXiv preprint arXiv:1802.03216, 2018
Cited by 29 · 2018
Influencing Social Networks: An Optimal Control Study.
D Bloembergen, BR Sahraei, H Bou-Ammar, K Tuyls, G Weiss
ECAI 14, 105-110, 2014
Cited by 24 · 2014
Optimizing complex automated negotiation using sparse pseudo-input Gaussian processes
S Chen, HB Ammar, K Tuyls, G Weiss
Proceedings of the 2013 international conference on Autonomous agents and …, 2013
Cited by 24 · 2013
Distributed Newton method for large-scale consensus optimization
R Tutunov, H Bou-Ammar, A Jadbabaie
IEEE Transactions on Automatic Control 64 (10), 3983-3994, 2019
Cited by 22 · 2019
Theory of cooperation in complex social networks
B Ranjbar-Sahraei, HB Ammar, D Bloembergen, K Tuyls, G Weiss
Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI-14), 2014
Cited by 19 · 2014
Reinforcement learning transfer via common subspaces
HB Ammar, ME Taylor
International Workshop on Adaptive and Learning Agents, 21-36, 2011
Cited by 19* · 2011
Inexpensive user tracking using Boltzmann machines
E Mocanu, DC Mocanu, HB Ammar, Z Zivkovic, A Liotta, E Smirnov
2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 1-6, 2014
Cited by 16 · 2014
Articles 1–20