Haoqi Li
Title · Cited by · Year
Speaker-invariant affective representation learning via adversarial training
H Li, M Tu, J Huang, S Narayanan, P Georgiou
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
60 · 2020
Self-supervised speaker verification with simple siamese network and self-supervised regularization
M Sang, H Li, F Liu, AO Arnold, L Wan
ICASSP 2022-2022 IEEE international conference on acoustics, speech and …, 2022
35 · 2022
Learning from past mistakes: improving automatic speech recognition output via noisy-clean phrase context modeling
PG Shivakumar, H Li, K Knight, P Georgiou
APSIPA Transactions on Signal and Information Processing 8, e8, 2019
31 · 2019
A deep reinforcement learning framework for identifying funny scenes in movies
H Li, N Kumar, R Chen, P Georgiou
2018 IEEE International Conference on Acoustics, Speech and Signal …, 2018
26 · 2018
Unsupervised Latent Behavior Manifold Learning from Acoustic Features: audio2behavior
H Li, B Baucom, P Georgiou
Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International …, 2017
17 · 2017
Linking emotions to behaviors through deep transfer learning
H Li, B Baucom, P Georgiou
PeerJ Computer Science 6, e246, 2020
15 · 2020
Automatic prediction of suicidal risk in military couples using multimodal interaction cues from couples conversations
SN Chakravarthula, M Nasir, SY Tseng, H Li, TJ Park, B Baucom, ...
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
14 · 2020
Sparsely Connected and Disjointly Trained Deep Neural Networks for Low Resource Behavioral Annotation: Acoustic Classification in Couples' Therapy
H Li, B Baucom, P Georgiou
Interspeech 2016, 1407–1411, 2016
14 · 2016
An empirical analysis of information encoded in disentangled neural speaker representations
R Peri, H Li, K Somandepalli, A Jati, S Narayanan
Proc. Odyssey 2020 The Speaker and Language Recognition Workshop, 194–201, 2020
13 · 2020
Predicting behavior in cancer-afflicted patient and spouse interactions using speech and language
SN Chakravarthula, H Li, SY Tseng, M Reblin, P Georgiou
Proc. Interspeech 2019, 3073–3077, 2019
12 · 2019
"Honey, I Learned to Talk": Multimodal Fusion for Behavior Analysis
SY Tseng, H Li, B Baucom, P Georgiou
Proceedings of the 20th ACM International Conference on Multimodal …, 2018
11 · 2018
Acted vs. improvised: Domain adaptation for elicitation approaches in audio-visual emotion recognition
H Li, Y Kim, CH Kuo, S Narayanan
arXiv preprint arXiv:2104.01978, 2021
9 · 2021
Deep reinforcement learning framework for characterizing video content
R Chen, N Kumar, H Li
US Patent 10,885,341, 2021
8 · 2021
Zero-shot end-to-end spoken language understanding via cross-modal selective self-training
J He, J Salazar, K Yao, H Li, J Cai
arXiv preprint arXiv:2305.12793, 2023
7 · 2023
Emotion Expression Estimates to Measure and Improve Multimodal Social-Affective Interactions
JA Brooks, V Tiruvadi, A Baird, P Tzirakis, H Li, C Gagne, M Oh, A Cowen
Companion Publication of the 25th International Conference on Multimodal …, 2023
1 · 2023
Unsupervised speech representation learning for behavior modeling using triplet enhanced contextualized networks
H Li, B Baucom, S Narayanan, P Georgiou
Computer Speech & Language 70, 101226, 2021
1 · 2021
Deep reinforcement learning framework for sequence level prediction of high dimensional data
R Chen, N Kumar, H Li
US Patent 11,829,878, 2023
2023
Deep reinforcement learning framework for characterizing video content
R Chen, N Kumar, H Li
US Patent 11,386,657, 2022
2022
USC-SIPI Report #450: Behavior Understanding from Speech under Constrained Conditions: Exploring Sparse Networks, Transfer and
H Li
2020