Kai Zhen
Senior Applied Scientist at Amazon
Verified email at amazon.com - Homepage
Title
Cited by
Year
Cascaded cross-module residual learning towards lightweight end-to-end speech coding
K Zhen, J Sung, MS Lee, S Beack, M Kim
arXiv preprint arXiv:1906.07769, 2019
47 · 2019
Psychoacoustic calibration of loss functions for efficient end-to-end neural audio coding
K Zhen, MS Lee, J Sung, S Beack, M Kim
IEEE Signal Processing Letters 27, 2159-2163, 2020
31 · 2020
Scalable and efficient neural speech coding: A hybrid design
K Zhen, J Sung, MS Lee, S Beack, M Kim
IEEE/ACM Transactions on Audio, Speech, and Language Processing 30, 12-25, 2021
21* · 2021
Efficient and scalable neural residual waveform coding with collaborative quantization
K Zhen, MS Lee, J Sung, S Beack, M Kim
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
20 · 2020
Source-aware neural speech coding for noisy speech compression
H Yang, K Zhen, S Beack, M Kim
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
17 · 2021
Audio signal encoding method and apparatus and audio signal decoding method and apparatus using psychoacoustic-based weighted error function
J Sung, M Kim, A Sivaraman, K Zhen
US Patent 11,416,742, 2022
15 · 2022
Sparsification via compressed sensing for automatic speech recognition
K Zhen, HD Nguyen, FJ Chang, A Mouchtaris, A Rastrow
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
15 · 2021
Sub-8-bit quantization aware training for 8-bit neural network accelerator with on-device speech recognition
K Zhen, HD Nguyen, R Chinta, N Susanj, A Mouchtaris, T Afzal, ...
arXiv preprint arXiv:2206.15408, 2022
12 · 2022
Sub-8-bit quantization for on-device speech recognition: A regularization-free approach
K Zhen, M Radfar, H Nguyen, GP Strimel, N Susanj, A Mouchtaris
2022 IEEE Spoken Language Technology Workshop (SLT), 15-22, 2023
10 · 2023
A dual-staged context aggregation method towards efficient end-to-end speech enhancement
K Zhen, MS Lee, M Kim
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
10* · 2020
On psychoacoustically weighted cost functions towards resource-efficient deep neural networks for speech denoising
K Zhen, A Sivaraman, J Sung, M Kim
arXiv preprint arXiv:1801.09774, 2018
10 · 2018
Conmer: Streaming Conformer without self-attention for interactive voice assistants
M Radfar, P Lyskawa, B Trujillo, Y Xie, K Zhen, J Heymann, D Filimonov, ...
6 · 2023
A functional flavor of service composition
L Bao, Q Li, K Zhen, W Xiang, P Chen
2011 Eighth International Conference on Fuzzy Systems and Knowledge …, 2011
2 · 2011
AdaZeta: Adaptive Zeroth-Order Tensor-Train Adaption for Memory-Efficient Large Language Models Fine-Tuning
Y Yang, K Zhen, E Banijamal, A Mouchtaris, Z Zhang
arXiv preprint arXiv:2406.18060, 2024
1 · 2024
Residual coding method of linear prediction coding coefficient based on collaborative quantization, and computing device for performing the method
M Kim, K Zhen, MS Lee, SK Beack, J Sung, TJ Lee, JS Choi
US Patent 11,488,613, 2022
1 · 2022
Audio signal encoding method and audio signal decoding method, and encoder and decoder performing the same
MS Lee, J Sung, M Kim, K Zhen
US Patent 11,276,413, 2022
1 · 2022
Hybrid supervised-unsupervised image topic visualization with convolutional neural network and LDA
K Zhen, M Birla, D Crandall, B Zhang, J Qiu
arXiv preprint, 2017
1 · 2017
Max-margin transducer loss: Improving sequence-discriminative training using a large-margin learning strategy
RV Swaminathan, GP Strimel, A Rastrow, H Mallidi, K Zhen, HD Nguyen, ...
ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and …, 2024
2024
Apparatus and method for speech processing using a densely connected hybrid neural network
M Kim, MS Lee, SK Beack, J Sung, TJ Lee, JS Choi, K Zhen
US Patent 11,837,220, 2023
2023
Method and apparatus for processing audio signal
MS Lee, SK Beack, J Sung, TJ Lee, JS Choi, M Kim, K Zhen
US Patent 11,790,926, 2023
2023
Articles 1–20