Postdoctoral Researcher - Satoshi Nakamura Project, School of Artificial Intelligence (Ref. PR2026/062/01)
Position title: Postdoctoral Researcher (Speech & Language Processing)
Lab/Unit: Spoken Language Communication (SLC) Lab, School of Artificial Intelligence (SAI), The Chinese University of Hong Kong, Shenzhen (CUHK Shenzhen)
Location: Shenzhen, China
Appointment: Full-time; 24-month initial term with possibility of renewal
Start date: Flexible (target window: by Dec. 2025)
About the lab
The SLC Lab advances research in speech and multilingual communication, with a focus on simultaneous speech translation (SimulST), LLM-based S2T/S2ST, simultaneous policy learning (RL/DPO), streaming alignment (CIF/monotonic attention/Transducer), and multimodal LLMs. We work within CUHK Shenzhen’s rapidly growing School of Artificial Intelligence (SAI) and collaborate with international partners, supported by modern GPU resources and large-scale multilingual corpora.
Research themes (illustrative)
1. Simultaneous Speech Translation (SimulST): quality–latency trade-offs; read–write/simultaneous policies via RL/DPO; dynamic prompts and timing control; evaluation with AL/ATD (see the illustrative sketch after this list).
2. Streaming ASR/ST & Alignment: RNN-T/Transducer, monotonic attention, CIF, segmentation; code-switching and low-latency decoding.
3. Speech-to-Speech Translation (S2ST) & TTS/Voice: neural codecs, expressive prosody, cross-lingual synthesis/editing.
4. Multimodal & LLM-based Speech/NLP: instruction-tuned LLMs, retrieval-augmented speech translation, safety and evaluation for spoken LLMs.
5. Robustness & Low-Resource: noise/reverberation/accent robustness, data selection/augmentation, privacy-aware modeling.
6. Machine Speech Chain: joint ASR↔TTS modeling and data bootstrapping.
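To make the SimulST terminology above concrete, here is a minimal Python sketch (illustrative only, not lab code; the function names and toy lengths are assumptions) of a wait-k read/write schedule and the Average Lagging (AL) latency metric commonly used in SimulST evaluation.

# Illustrative sketch: wait-k read/write schedule and Average Lagging (AL),
# at token-level granularity with toy lengths (assumptions, not lab code).

def wait_k_schedule(src_len, tgt_len, k):
    # g(i): number of source tokens read before emitting target token i (1-indexed).
    return [min(k + i - 1, src_len) for i in range(1, tgt_len + 1)]

def average_lagging(g, src_len, tgt_len):
    # AL = (1/tau) * sum_{i=1..tau} ( g(i) - (i-1)/r ), where r = tgt_len/src_len
    # and tau is the first target index emitted after the full source has been read.
    r = tgt_len / src_len
    tau = next(i for i, gi in enumerate(g, start=1) if gi == src_len)
    return sum(g[i - 1] - (i - 1) / r for i in range(1, tau + 1)) / tau

# Toy example: 10 source tokens, 12 target tokens, wait-3 policy.
g = wait_k_schedule(10, 12, 3)
print("g(i):", g)                                   # [3, 4, 5, ..., 10, 10, 10]
print("AL:", round(average_lagging(g, 10, 12), 2))  # ~3.58

In practice, evaluation would rely on established toolkits (e.g., SimulEval) rather than ad-hoc scripts; the sketch only spells out the quality–latency bookkeeping behind reported AL numbers.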
Responsibilities
1. Lead original research aligned with the themes above; formulate hypotheses and design rigorous experiments/ablations.
2. Build reproducible pipelines (PyTorch/JAX; Hugging Face; ESPnet/Fairseq/NeMo/SpeechBrain).
3. Author and present papers at ACL/EMNLP/AAAI/ICLR/ICASSP/INTERSPEECH/TASLP.
4. Mentor graduate/undergraduate students and coordinate collaborations within CUHK Shenzhen and with external partners.
5. Contribute to grant proposals, datasets, and open source/community releases as appropriate.
Minimum qualifications
1. Ph.D. in EE/CS/Computational Linguistics or related field (by start date).
2. Demonstrated excellence via publications at top-tier AI conferences in speech/NLP/ML.
3. Strong programming skills (Python) and experience with modern deep learning frameworks.
4. Excellent scientific communication in English.
Desired qualifications
1. Experience with SimulST/read–write policies, RL/DPO/RLHF, CIF/CTC/monotonic attention, multilingual and low-resource speech, evaluation (COMET, AL/ATD, WER/TER), accent/noise robustness, prosody/voice, the machine speech chain, and large-scale training.
2. Project leadership and mentoring experience; familiarity with MLOps and reproducibility practices.
Compensation and support
1. Competitive salary commensurate with experience per CUHK Shenzhen policies.
2. Benefits according to university policy; conference travel support; access to internal compute/GPU clusters.
How to apply
Send a single PDF to [email protected] containing:
1. CV (with publications)
2. Research statement (1–2 pages)
3. Up to three representative publications
4. Names and emails of 2–3 referees
Application review begins 15 Oct. 2025 and will continue until the position is filled.
For inquiries, please contact [email protected].
Inclusive Environment
CUHK-Shenzhen and the SAI/SLC community value diversity, equity, and inclusion. All qualified applicants are encouraged to apply.
Additional notes (as applicable)
1. Work mode: Onsite.
2. Visa sponsorship: May be available in accordance with university policies and local regulations.
3. Security/export control: Final offers may be subject to relevant compliance checks.
For details, please see: https://ahalabs.wordpress.com/2025/09/20/post/
