News

[04/2026] NeuroVoice Showcased at PolyU Alumni Day!

We are delighted to share that the SIOK LAB of the Department of Language Sciences and Technology at The Hong Kong Polytechnic University was invited to participate in the PolyU Alumni Day showcase. At the event, our laboratory presented NeuroVoice, a non-invasive intelligent speech brain-computer interface system developed entirely in-house with full proprietary intellectual property.

NeuroVoice is designed to help restore natural communication for individuals with speech impairments. It integrates a non-invasive wearable device with an intelligent analysis platform to monitor language-related brain activity, interpret neural signals, and translate them into speech and text in real time through an AI-powered model.

During the showcase, many volunteers enthusiastically experienced the NeuroVoice system, which received highly positive feedback from scholars, teachers, students, and alumni alike. We are sincerely grateful for the long-standing support from the University, the Faculty, and the Department, as well as for the tireless dedication of our team members, whose hard work made this presentation possible. We would also like to extend our special thanks to the Dean of the Faculty of Humanities, Dean Hu, for the continued attention and support given to our research.

The showcase also brought us many valuable suggestions from users, all of which our team greatly appreciates. We will continue refining the system and strive to launch an upgraded version of NeuroVoice in the near future, with the goal of better serving individuals with speech impairments.

[04/2026] UBSN Research Seminar: Prof. Yang Yang to Give a Talk at PolyU (17 April 2026)!

We are pleased to share that our lab will host Prof. Yang Yang (Associate Professor, Institute of Psychology, Chinese Academy of Sciences) for a UBSN Research Seminar under the UBSN Capacity Building Scheme: Inbound Scheme at The Hong Kong Polytechnic University on 17 April 2026 (Friday).

In this talk, “How the Brain Learns to Read and Write – And Why Some Struggle,” Prof. Yang will present recent findings on the developmental and evolutionary mechanisms of Chinese handwriting and reading based on functional and structural MRI studies. He will discuss how handwriting development is accompanied by focal functional specialization, increasing functional lateralization, and dynamic reconfiguration of cognitive, sensorimotor, and visual networks. He will also introduce cross-species evidence from humans and macaques that highlights the anatomical similarity and functional evolution of Exner’s area, a shared brain locus involved in both reading and writing.

In the second part of the talk, Prof. Yang will discuss the neural basis of writing deficits and their relationship to reading impairments in developmental dyslexia. He will present findings showing that children with dyslexia exhibit abnormalities in both regional activation and functional connectivity during handwriting, and that reduced activation in the left supplementary motor area and the right precuneus is linked to impairments in both handwriting and reading. He will also introduce a digital handwriting-based training program that significantly improves writing and reading skills, with transfer effects on attention abilities.

• Date: 17 April 2026 (Friday)
• Time: 11:00 am–12:00 noon
• Venue: Room PQ303, PolyU
• Registration: Please register via the QR code on the poster.

We warmly welcome students and colleagues to join us!

[03/2026] Congratulations to Gong, whose paper is accepted by IEEE Sensors Journal!

S Gong, Y Li, Z Kang, B Chai, W Zeng, H Yan, Z Zhang, WT Siok, N Wang. LEREL: Lipschitz Continuity-Constrained Emotion Recognition Ensemble Learning For Electroencephalography [J]. IEEE Sensors Journal, 2026

Abstract:

Accurate and efficient recognition of emotional states is critical for human social functioning, and impairments in this ability are associated with significant psychosocial difficulties. While electroencephalography (EEG) offers a powerful tool for objective emotion detection, existing EEG-based Emotion Recognition (EER) methods suffer from three key limitations: (1) insufficient model stability, (2) limited accuracy in processing high-dimensional nonlinear EEG signals, and (3) poor robustness against intra-subject variability and signal noise. To address these challenges, we introduce Lipschitz continuity-constrained Ensemble Learning (LEL), a novel framework that enhances EEG-based emotion recognition by enforcing Lipschitz continuity constraints on Transformer-based attention mechanisms, spectral extraction, and normalization modules. This constraint ensures model stability, reduces sensitivity to signal variability and noise, and improves generalization capability. Additionally, LEL employs a learnable ensemble fusion strategy that optimally combines decisions from multiple heterogeneous classifiers to mitigate single-model bias and variance. Extensive experiments on three public benchmark datasets (EAV, FACED, and SEED) demonstrate superior performance, achieving average recognition accuracies of 74.25%, 81.19%, and 86.79%, respectively. The official implementation codes are available at https://github.com/NZWANG/LEL.
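As a rough illustration of the two ideas in the abstract, the sketch below (plain NumPy; all function names and shapes are our assumptions, not taken from the LEL codebase) shows how spectral normalization caps a linear map's Lipschitz constant at 1, and how learnable softmax weights can fuse several classifiers' probabilities.

```python
import numpy as np

def spectral_normalize(W):
    """Rescale W so its spectral norm (largest singular value) is 1,
    making the linear map x -> W @ x 1-Lipschitz."""
    sigma = np.linalg.norm(W, 2)              # ord=2 on a matrix = largest singular value
    return W / max(sigma, 1e-12)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuse(prob_list, w):
    """Weighted fusion of classifier probability outputs; w are learnable logits."""
    weights = softmax(np.asarray(w, dtype=float))
    return sum(wi * p for wi, p in zip(weights, prob_list))

rng = np.random.default_rng(0)
W = spectral_normalize(rng.standard_normal((64, 64)))
# A 1-Lipschitz map cannot expand the distance between two inputs:
x1, x2 = rng.standard_normal(64), rng.standard_normal(64)
assert np.linalg.norm(W @ x1 - W @ x2) <= np.linalg.norm(x1 - x2) + 1e-9

# Fuse three classifiers' softmax outputs for 5 trials, 3 emotion classes.
probs = [softmax(rng.standard_normal((5, 3))) for _ in range(3)]
fused = fuse(probs, w=[0.0, 0.0, 0.0])        # equal logits -> uniform weights
print(fused.shape)                            # (5, 3)
```

The non-expansion property asserted above is the sense in which a Lipschitz constraint reduces sensitivity to signal noise: bounded input perturbations produce bounded output perturbations.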

[03/2026] LST Research Seminar: Dr. Yuanning Li to Give a Talk at PolyU (24 March 2026)!

We are pleased to share that our lab will host Dr. Yuanning Li (Assistant Professor, School of Biomedical Engineering, ShanghaiTech University) for an LST Research Seminar at The Hong Kong Polytechnic University on 24 March 2026 (Tuesday).

In this talk, “Neural coding, computational models and brain-computer interfaces for human languages,” Dr. Li will introduce recent computational efforts to understand and reconstruct speech perception and production using human intracranial electrophysiology recordings and AI models. He will also discuss converging representations between biological speech networks and deep neural network models, and how tailored deep learning models can enable speech brain–computer interfaces that synthesize speech directly from intracranial signals.

  • Date: 24 March 2026 (Tue)
  • Time: 15:00–16:00 (HKT)
  • Venue: HHB106, Hung Hom Bay Campus, PolyU
  • Zoom: Meeting ID 925 2441 8249 | Password 557055 (or scan the QR code on the poster)

We warmly welcome students and colleagues to join us onsite or online!

[02/2026] NeuroVoice Featured on Times Higher Education (THE)!

We are delighted to share that NeuroVoice has been featured on Times Higher Education (THE) in a story on “Leading the way in AI and humanities research”.

NeuroVoice is a brain–computer interface application designed to enhance communication for individuals with speech impairments. It integrates a wearable device that monitors language-related brain regions with an analysis platform for interpretation and real-time visualisation, and can decode neural activity to “translate” it into speech and text via an AI-based model.

[01/2026] Congratulations to Dr. WANG Nizhuan on His Invited Talk at IECBS-IECNS 2026!

We are delighted to announce that Dr. WANG Nizhuan has been warmly invited by Prof. Woon-Man Kung to deliver an invited talk at The 5th International Electronic Conference on Brain Sciences & 1st International Electronic Conference on Neurosciences (IECBS-IECNS 2026), which will be held online on March 9–11, 2026.

During the conference, Dr. WANG Nizhuan will present “Single-Channel EEG-Based Brain-Computer Interfaces: Current Landscape and Future Directions,” a comprehensive analysis of the field’s current landscape, key challenges, and future directions, addressed to experts, scholars, and colleagues worldwide. He looks forward to meeting everyone at IECBS-IECNS 2026.

For more information about the conference and the speaker session, please visit: https://sciforum.net/event/IECBS-IECNS2026?section=#event_speakers

[12/2025] Congratulations to Lei, whose paper is accepted by Visual Computing for Industry, Biomedicine, and Art!

Lei Wang, Weiming Zeng, Kai Long, Hongyu Chen, Rongfeng Lan, Li Liu, Wai Ting Siok, Nizhuan Wang. Advances in Photoacoustic Imaging Reconstruction and Quantitative Analysis for Biomedical Applications [J]. Visual Computing for Industry, Biomedicine, and Art, 2025

Abstract:
Photoacoustic imaging (PAI), a modality that combines the high contrast of optical imaging with the deep penetration of ultrasound, is rapidly transitioning from preclinical research to clinical practice. However, its widespread clinical adoption faces challenges such as the inherent trade-off between penetration depth and spatial resolution, along with the demand for faster imaging speeds. This review comprehensively examines the fundamental principles of PAI, focusing on three primary implementations: photoacoustic computed tomography (PACT), photoacoustic microscopy (PAM), and photoacoustic endoscopy (PAE). It critically analyzes their respective advantages and limitations to provide insights into practical applications. The discussion then extends to recent advancements in image reconstruction and artifact suppression, where both conventional and deep learning (DL)-based approaches have been highlighted for their role in enhancing image quality and streamlining workflows. Furthermore, this work explores progress in quantitative PAI, particularly its ability to precisely measure hemoglobin concentration, oxygen saturation, and other physiological biomarkers. Finally, this review outlines emerging trends and future directions, underscoring the transformative potential of DL in shaping the clinical evolution of PAI.
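For readers unfamiliar with PAI reconstruction, the snippet below sketches delay-and-sum backprojection, the classical baseline that the DL-based methods surveyed in the review aim to improve on. The circular-array geometry, sampling numbers, and all names are illustrative assumptions, not from the paper.

```python
import numpy as np

def delay_and_sum(signals, sensor_pos, grid, c, fs):
    """Backproject sensor signals onto an image grid.
    signals: (n_sensors, n_samples); sensor_pos, grid: (n, 2) in metres;
    c: speed of sound (m/s); fs: sampling rate (Hz)."""
    image = np.zeros(len(grid))
    n_samples = signals.shape[1]
    for s, pos in zip(signals, sensor_pos):
        d = np.linalg.norm(grid - pos, axis=1)       # pixel-to-sensor distance
        idx = np.round(d / c * fs).astype(int)       # time-of-flight sample index
        valid = idx < n_samples
        image[valid] += s[idx[valid]]                # sum each sensor's delayed sample
    return image

# Synthetic check: one point absorber at the origin inside a 1 cm ring array.
# Each sensor records a unit spike at its time of flight.
c, fs = 1500.0, 20e6
src = np.array([0.0, 0.0])
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
sensors = 0.01 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
n_samples = 400
signals = np.zeros((64, n_samples))
tof = np.round(np.linalg.norm(sensors - src, axis=1) / c * fs).astype(int)
signals[np.arange(64), tof] = 1.0

# Reconstruct a 2 mm x 2 mm region around the source.
xs = np.linspace(-1e-3, 1e-3, 21)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
img = delay_and_sum(signals, sensors, grid, c, fs)
print(grid[np.argmax(img)])   # peaks at the true source location
```

Because every sensor's delayed sample lines up only at the true source pixel, the reconstruction peaks there; the artifacts and resolution limits of this simple scheme are precisely what the learned reconstruction methods in the review target.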

[12/2025] Congratulations to Dr. WANG Nizhuan on His Election as Senior Associate Editor of Cognitive Neurodynamics!

We are delighted to announce that Dr. WANG Nizhuan has been elected as Senior Associate Editor of Cognitive Neurodynamics (CODY), a prestigious hybrid journal published by Springer Nature.

Founded in 2007, Cognitive Neurodynamics has established itself as a key academic platform in related fields. It currently holds a latest impact factor of 3.9 and is ranked Q2 in the Journal Citation Reports (JCR). The journal focuses on cutting-edge research areas including cognitive neuroscience, brain-computer interfaces, and computational neuroscience, providing a vital forum for scholars worldwide to exchange innovative ideas and findings.

For more information about the journal and its editorial board, please visit: https://link.springer.com/journal/11571/editorial-board.

[10/2025] Congratulations to Dr. WANG Nizhuan on His Plenary Address at INRS2025!

Dr. WANG Nizhuan has been invited to deliver a plenary address at the 2025 International Neural Regeneration Symposium (INRS2025), held from October 24-26, 2025. His presentation, titled “From Neural Mechanisms to Clinical Diagnosis: Decoding Brain Disorders via AI-powered Neuroimaging,” will showcase his pioneering research at the intersection of AI, neuroimaging and brain disorders.

[08/2025] One paper is accepted to MIND2025 (Oral)!

Yueyang Li, Shengyu Gong, Weiming Zeng, Nizhuan Wang, Wai Ting Siok. FreqDGT: Frequency-Adaptive Dynamic Graph Networks with Transformer for Cross-subject EEG Emotion Recognition. The 2025 International Conference on Machine Intelligence and Nature-InspireD Computing (MIND).

Abstract:
Electroencephalography (EEG) serves as a reliable and objective signal for emotion recognition in affective brain-computer interfaces, offering unique advantages through its high temporal resolution and ability to capture authentic emotional states that cannot be consciously controlled. However, cross-subject generalization remains a fundamental challenge due to individual variability, cognitive traits, and emotional responses. We propose FreqDGT, a frequency-adaptive dynamic graph transformer that systematically addresses these limitations through an integrated framework. FreqDGT introduces frequency-adaptive processing (FAP) to dynamically weight emotion-relevant frequency bands based on neuroscientific evidence, employs adaptive dynamic graph learning (ADGL) to learn input-specific brain connectivity patterns, and implements multi-scale temporal disentanglement network (MTDN) that combines hierarchical temporal transformers with adversarial feature disentanglement to capture both temporal dynamics and ensure cross-subject robustness. Comprehensive experiments demonstrate that FreqDGT significantly improves cross-subject emotion recognition accuracy, confirming the effectiveness of integrating frequency-adaptive, spatial-dynamic, and temporal-hierarchical modeling while ensuring robustness to individual differences. The code is available at https://github.com/NZWANG/FreqDGT.
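As a loose illustration of the frequency-adaptive idea (not the FreqDGT implementation; the band definitions, shapes, and names below are our assumptions), one can compute per-band EEG power and scale each band's features by a learnable softmax weight:

```python
import numpy as np

# Canonical EEG bands in Hz (a common convention, assumed here).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_power(eeg, fs):
    """Mean spectral power per band. eeg: (channels, samples)."""
    freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2
    return np.stack([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
                     for lo, hi in BANDS.values()], axis=0)   # (bands, channels)

def reweight(powers, logits):
    """Scale each band by a learnable softmax weight (logits would be
    trained end-to-end; here they are just an input)."""
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return powers * w[:, None]

fs = 250
eeg = np.random.default_rng(1).standard_normal((32, 2 * fs))   # 32 channels, 2 s
p = band_power(eeg, fs)
out = reweight(p, np.zeros(len(BANDS)))                        # uniform weights
print(p.shape, out.shape)                                      # (5, 32) (5, 32)
```

In a trained model the logits would be learned so that emotion-relevant bands (e.g. alpha and gamma, per the neuroscientific evidence the abstract cites) receive larger weights.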

[07/2025] Two papers are accepted to Neural Networks!

Wenhao Dong*, Yueyang Li*, Weiming Zeng, Lei Chen, Hongjie Yan, Wai Ting Siok, Nizhuan Wang. STARFormer: A Novel Spatio-Temporal Aggregation Reorganization Transformer of fMRI for Brain Disorder Diagnosis [J]. Neural Networks (2025): 107927.

Abstract:
Many existing methods that use functional magnetic resonance imaging (fMRI) to classify brain disorders, such as autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD), often overlook the integration of spatial and temporal dependencies of the blood oxygen level-dependent (BOLD) signals, ….

Hongyu Chen, Weiming Zeng, Chengcheng Chen, Luhui Cai, Fei Wang, Yuhu Shi, Lei Wang, Wei Zhang, Yueyang Li, Hongjie Yan, Wai Ting Siok, Nizhuan Wang. EEG emotion copilot: Optimizing lightweight LLMs for emotional EEG interpretation with assisted medical record generation [J]. Neural Networks (2025): 107848.

Abstract:
In the fields of affective computing (AC) and brain-computer interface (BCI), the analysis of physiological and behavioral signals to discern individual emotional states has emerged as a critical research frontier. While deep learning-based approaches have made notable strides in EEG emotion recognition, particularly in feature extraction and pattern recognition, significant challenges persist in achieving end-to-end emotion computation, including rapid processing, individual adaptation….
