| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Client selection for federated learning with heterogeneous resources in mobile edge | T Nishio, R Yonetani | 2019 IEEE International Conference on Communications (ICC), 1-7 | 1692 | 2019 |
| Hybrid-FL for wireless networks: Cooperative learning mechanism using non-IID data | N Yoshida, T Nishio, M Morikura, K Yamamoto, R Yonetani | 2020 IEEE International Conference on Communications (ICC), 1-7 | 263* | 2020 |
| Future person localization in first-person videos | T Yagi, K Mangalam, R Yonetani, Y Sato | Proceedings of the IEEE Conference on Computer Vision and Pattern … | 224 | 2018 |
| Can eye help you? Effects of visualizing eye fixations on remote collaboration scenarios for physical tasks | K Higuchi, R Yonetani, Y Sato | Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems … | 133 | 2016 |
| Path planning using Neural A* search | R Yonetani, T Taniai, M Barekatain, M Nishimura, A Kanezaki | International Conference on Machine Learning, 12029-12039 | 104 | 2021 |
| Recognizing micro-actions and reactions from paired egocentric videos | R Yonetani, KM Kitani, Y Sato | Proceedings of the IEEE Conference on Computer Vision and Pattern … | 95 | 2016 |
| Degree of interest estimating device and degree of interest estimating method | K Sakata, S Maeda, R Yonetani, H Kawashima, T Hirayama, ... | US Patent 9,538,219 | 83 | 2017 |
| Privacy-preserving visual learning using doubly permuted homomorphic encryption | R Yonetani, V Naresh Boddeti, KM Kitani, Y Sato | Proceedings of the IEEE International Conference on Computer Vision, 2040-2050 | 81 | 2017 |
| Computational models of human visual attention and their implementations: A survey | A Kimura, R Yonetani, T Hirayama | IEICE Transactions on Information and Systems 96 (3), 562-578 | 65 | 2013 |
| Ego-surfing first-person videos | R Yonetani, KM Kitani, Y Sato | Proceedings of the IEEE Conference on Computer Vision and Pattern … | 55 | 2015 |
| EgoScanning: Quickly scanning first-person videos with egocentric elastic timelines | K Higuchi, R Yonetani, Y Sato | Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems … | 54 | 2017 |
| Decentralized learning of generative adversarial networks from multi-client non-IID data | R Yonetani, T Takahashi, A Hashimoto, Y Ushiku | arXiv preprint arXiv:1905.09684 | 45 | 2019 |
| Gaze target determination device and gaze target determination method | K Sakata, S Maeda, R Yonetani, H Kawashima, T Hirayama, ... | US Patent 8,678,589 | 40 | 2014 |
| L2B: Learning to balance the safety-efficiency trade-off in interactive crowd-aware robot navigation | M Nishimura, R Yonetani | 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems … | 38 | 2020 |
| Precise multi-modal in-hand pose estimation using low-precision sensors for robotic assembly | F von Drigalski, K Hayashi, Y Huang, R Yonetani, M Hamaya, K Tanaka, ... | 2021 IEEE International Conference on Robotics and Automation (ICRA), 968-974 | 33 | 2021 |
| Multi-mode saliency dynamics model for analyzing gaze and attention | R Yonetani, H Kawashima, T Matsuyama | Proceedings of the Symposium on Eye Tracking Research and Applications, 115-122 | 33 | 2012 |
| MULTIPOLAR: Multi-source policy aggregation for transfer reinforcement learning between diverse environmental dynamics | M Barekatain, R Yonetani, M Hamaya | arXiv preprint arXiv:1909.13111 | 32 | 2019 |
| Mental focus analysis using the spatio-temporal correlation between visual saliency and eye movements | R Yonetani, H Kawashima, T Hirayama, T Matsuyama | Journal of Information Processing 20 (1), 267-276 | 31 | 2012 |
| Support strategies for remote guides in assisting people with visual impairments for effective indoor navigation | R Kamikubo, N Kato, K Higuchi, R Yonetani, Y Sato | Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems … | 25 | 2020 |
| Crowd density forecasting by modeling patch-based dynamics | H Minoura, R Yonetani, M Nishimura, Y Ushiku | IEEE Robotics and Automation Letters 6 (2), 287-294 | 19 | 2020 |