<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">
<article article-type="review-article" dtd-version="1.0" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">JNN</journal-id>
<journal-title-group>
<journal-title>Journal of Neuromonitoring &amp; Neurophysiology</journal-title><abbrev-journal-title>J Neuromonit Neurophysiol</abbrev-journal-title></journal-title-group>
<issn pub-type="ppub">2799-5496</issn>
<issn pub-type="epub">3058-5449</issn>
<publisher>
<publisher-name>Korean Intraoperative Neural Monitoring Society</publisher-name></publisher></journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.54441/jnn.2025.5.2.75</article-id>
<article-id pub-id-type="publisher-id">jnn-2025-5-2-75</article-id>
<article-categories>
<subj-group>
<subject>Review Article</subject></subj-group></article-categories>
<title-group>
<article-title>Artificial intelligence for intraoperative neuromonitoring: signal interpretation, risk prediction, and clinical translation</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">http://orcid.org/0009-0007-2025-9705</contrib-id>
<name><surname>Lee</surname><given-names>Yong Seok</given-names></name>
<xref ref-type="aff" rid="af1-jnn-2025-5-2-75"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">http://orcid.org/0000-0001-7560-1140</contrib-id>
<name><surname>Woo</surname><given-names>Seung Hoon</given-names></name>
<xref ref-type="corresp" rid="c1-jnn-2025-5-2-75"/>
<xref ref-type="aff" rid="af1-jnn-2025-5-2-75"><sup>1</sup></xref>
<xref ref-type="aff" rid="af2-jnn-2025-5-2-75"><sup>2</sup></xref>
</contrib>
<aff id="af1-jnn-2025-5-2-75">
<label>1</label>Department of Medical Laser, Dankook University, Cheonan, <country>Republic of Korea</country></aff>
<aff id="af2-jnn-2025-5-2-75">
<label>2</label>Department of Otorhinolaryngology-Head and Neck Surgery, Dankook University College of Medicine, Cheonan, <country>Republic of Korea</country></aff>
</contrib-group>
<author-notes>
<corresp id="c1-jnn-2025-5-2-75">Corresponding to Seung Hoon Woo E-mail. <email>lesaby@hanmail.net</email></corresp>
</author-notes>
<pub-date pub-type="ppub">
<month>11</month>
<year>2025</year></pub-date>
<pub-date pub-type="epub">
<day>30</day>
<month>11</month>
<year>2025</year></pub-date>
<volume>5</volume>
<issue>2</issue>
<fpage>75</fpage>
<lpage>87</lpage>
<history>
<date date-type="received">
<day>29</day>
<month>10</month>
<year>2025</year></date>
<date date-type="rev-recd">
<day>10</day>
<month>11</month>
<year>2025</year></date>
<date date-type="accepted">
<day>10</day>
<month>11</month>
<year>2025</year></date>
</history>
<permissions>
<copyright-statement>Copyright &#x000a9; 2025 Korean Intraoperative Neural Monitoring Society</copyright-statement>
<copyright-year>2025</copyright-year>
<license>
<license-p>Articles published in the JNN are open-access, distributed under the terms of the Creative Commons Attribution Non-Commercial License (<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by-nc/4.0">http://creativecommons.org/licenses/by-nc/4.0</ext-link>), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p></license></permissions>
<abstract><p>This review examines how artificial intelligence and machine learning are reshaping intraoperative neuromonitoring for thyroid and head and neck surgery, with emphasis on protecting the recurrent laryngeal nerve. We synthesize four methodological strands including end-to-end deep learning on electromyography, classical machine learning with engineered features, motor evoked potential analytics, and computer vision for nerve localization. We map inputs, model classes, and objectives, and compare recurrent laryngeal nerve palsy prediction pipelines that use intraoperative electromyography trend dynamics, registry-based clinical ensembles, and voice spectrogram-derived outcomes. For real-time safety, we contrast threshold-based alerts with machine learning detectors and hybrid systems, and we highlight interpretability, acquisition-to-alert latency, and robustness to artifacts. Evidence includes prospective evaluations within operating room workflows, yet gaps remain in external validation and generalization across sites. We outline deployment principles that include calibrated graded alerts, standardized visualization, and surgeon-in-the-loop operation aligned with Standard Protocol Items: Recommendations for Interventional Trials&#x02013;Artificial Intelligence (SPIRIT-AI), the Consolidated Standards of Reporting Trials&#x02013;Artificial Intelligence (CONSORT-AI), and Developmental and Exploratory Clinical Investigations of DEcision support systems driven by Artificial Intelligence (DECIDE-AI). Together, these elements enable earlier and more reliable detection and risk stratification while preserving clinical transparency.</p></abstract>
<kwd-group>
<kwd>Intraoperative neurophysiological monitoring</kwd>
<kwd>Recurrent laryngeal nerve</kwd>
<kwd>Sensitivity and specificity</kwd>
<kwd>Deep learning</kwd>
<kwd>Thyroid gland</kwd>
</kwd-group>
</article-meta></front>
<body>
<sec sec-type="intro">
<title>Introduction</title>
<p>Injury to the recurrent laryngeal nerve (RLN) remains one of the most consequential complications of thyroid and head-and-neck surgery, leading to dysphonia, aspiration, and lasting limitations in social participation and quality of life. Over the past three decades, intraoperative neuromonitoring (IONM) has progressed from an optional adjunct to a widely adopted tool that complements visual identification with functional assessments of vagal and RLN conduction. Intermittent monitoring verifies nerve function at predefined stages of the operation: an initial vagal baseline, the first confirmation of the RLN, a reassessment after dissection, and a final vagal check at the end of the case. Continuous monitoring adds automatic periodic stimulation (APS) of the vagus nerve at a low frequency to provide near real-time trends in amplitude and latency, allowing recognition of evolving neuropraxia before a clear loss of signal (LOS) occurs. In 2018, the International Neural Monitoring Study Group (INMSG) issued evidence-based guidelines that standardized definitions, outlined troubleshooting pathways, and set forth a staging algorithm for bilateral procedures when early LOS is detected on the first side &#x0005b;<xref ref-type="bibr" rid="b1-jnn-2025-5-2-75">1</xref>&#x0005d;. These normative documents catalyzed outcome-oriented research on how trend dynamics relate to postoperative vocal fold mobility and how monitoring should inform real-time surgical decisions &#x0005b;<xref ref-type="bibr" rid="b2-jnn-2025-5-2-75">2</xref>&#x0005d;.</p>
<p>A second, equally powerful trend is the rise of high-frequency, long-duration perioperative data streams: free-running laryngeal electromyography (EMG), evoked responses including motor evoked potentials (MEP), and increasingly synchronized surgical video &#x0005b;<xref ref-type="bibr" rid="b3-jnn-2025-5-2-75">3</xref>&#x0005d;. Manual interpretation of these data is cognitively demanding, prone to false positives from endotracheal tube malrotation, loss of electrode&#x02013;mucosa contact, electrocautery interference, or residual neuromuscular blockade, and marked by inter-operator variability. Against this backdrop, machine learning (ML) and deep learning provide pattern-recognition capabilities that denoise and classify EMG/MEP waveforms, detect pre-LOS deterioration earlier than static thresholds, and calibrate risk estimates to guide actions such as traction release or staging &#x0005b;<xref ref-type="bibr" rid="b3-jnn-2025-5-2-75">3</xref>-<xref ref-type="bibr" rid="b5-jnn-2025-5-2-75">5</xref>&#x0005d;. Complementary computer-vision models can identify the RLN and related anatomy in surgical video, supplying spatial context for signal-based alerts. We conducted a narrative review of peer-reviewed English-language studies published 2015&#x02013;2025 by searching PubMed, Embase, and Scopus with predefined terms (IONM, RLN, EMG, MEP, LOS, ML), using dual independent screening and data extraction. This review synthesizes the literature on artificial intelligence and machine learning (AI/ML) for IONM signal interpretation, surveys predictive models for RLN palsy, and proposes a practical roadmap for clinical integration aligned with contemporary AI trial-reporting guidance, including the Standard Protocol Items: Recommendations for Interventional Trials&#x02013;Artificial Intelligence (SPIRIT-AI), the Consolidated Standards of Reporting Trials&#x02013;Artificial Intelligence (CONSORT-AI), and Developmental and Exploratory Clinical Investigations of DEcision support systems driven by Artificial Intelligence (DECIDE-AI) &#x0005b;<xref ref-type="bibr" rid="b6-jnn-2025-5-2-75">6</xref>,<xref ref-type="bibr" rid="b7-jnn-2025-5-2-75">7</xref>&#x0005d;.</p>
</sec>
<sec>
<title>Background: Fundamentals of Intraoperative Neuromonitoring and Current Limitations</title>
<p>From a physiological and workflow perspective, IONM employs endotracheal surface electrodes to record compound muscle action potentials from the intrinsic laryngeal musculature during stimulation of the vagus nerve or the RLN. In intermittent monitoring, nerve function is confirmed at predefined junctures: an initial vagal baseline, the first identification of the RLN, a reassessment after dissection, and a final vagal check at the end of the case. Continuous monitoring introduces APS of the vagus at approximately 1&#x02013;3 Hz and yields near-continuous trends in amplitude and latency &#x0005b;<xref ref-type="bibr" rid="b1-jnn-2025-5-2-75">1</xref>&#x0005d;. Within this framework, the INMSG defines LOS as an EMG amplitude below 100 &#x003bc;V under standardized stimulation; segmental, or type 1, patterns indicate focal conduction block, whereas global, or type 2, patterns suggest proximal or technical causes, distinctions with clear prognostic and management implications &#x0005b;<xref ref-type="bibr" rid="b8-jnn-2025-5-2-75">8</xref>,<xref ref-type="bibr" rid="b9-jnn-2025-5-2-75">9</xref>&#x0005d;.</p>
<p>Injury dynamics and the severe combined event (sCE) are now well characterized. Prospective studies using continuous neuromonitoring show that traction-related neuropraxia typically appears first as a progressive reduction in amplitude accompanied by an increase in latency before hard LOS. A composite criterion known as the sCE, defined by a reduction in amplitude of at least 50% together with an increase in latency of at least 10% relative to baseline, has been linked to early postoperative vocal fold immobility when corrective action is delayed, and recovery of the signal is associated with favorable early mobility &#x0005b;<xref ref-type="bibr" rid="b10-jnn-2025-5-2-75">10</xref>,<xref ref-type="bibr" rid="b11-jnn-2025-5-2-75">11</xref>&#x0005d;. These observations anchor both real-time troubleshooting and quantitative endpoints for predictive modeling &#x0005b;<xref ref-type="bibr" rid="b2-jnn-2025-5-2-75">2</xref>,<xref ref-type="bibr" rid="b12-jnn-2025-5-2-75">12</xref>&#x0005d;.</p>
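<p>As a minimal illustration of how the quantitative criteria above could be operationalized, the following sketch checks the sCE and LOS definitions against per-stimulation amplitude and latency values referenced to a case baseline. The function names and data layout are assumptions for illustration, not a published implementation.</p>
<preformat>
# Illustrative check of the criteria summarized above: the severe combined
# event (sCE) as an amplitude drop of at least 50% together with a latency
# increase of at least 10% relative to baseline, and loss of signal (LOS) as
# amplitude below 100 microvolts under standardized stimulation.
# Names and layout are hypothetical.

def is_severe_combined_event(amplitude_uv, latency_ms,
                             baseline_amplitude_uv, baseline_latency_ms,
                             amp_drop_threshold=0.50,
                             latency_rise_threshold=0.10):
    """Return True when the current APS sample meets the sCE criterion."""
    amp_drop = 1.0 - (amplitude_uv / baseline_amplitude_uv)
    latency_rise = (latency_ms / baseline_latency_ms) - 1.0
    return amp_drop >= amp_drop_threshold and latency_rise >= latency_rise_threshold


def is_loss_of_signal(amplitude_uv, los_threshold_uv=100.0):
    """INMSG loss-of-signal rule; technical causes must be excluded separately."""
    return amplitude_uv &lt; los_threshold_uv
</preformat>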
<p>Key limitations persist despite standardization. Monitoring remains vulnerable to artifacts arising from endotracheal tube displacement or malrotation, pooled secretions, inadequate electrode&#x02013;mucosa contact, and interference from electrocautery, and it is further constrained by device heterogeneity in filtering and gain and by inconsistent thresholds across centers. Even with continuous neuromonitoring, subtle pre-LOS patterns may be overlooked and technical artifacts may be misclassified as neurogenic change, which argues for automated, robust, and interpretable analytics that operate at the cadence of APS and provide clear, actionable why-now explanations &#x0005b;<xref ref-type="bibr" rid="b3-jnn-2025-5-2-75">3</xref>,<xref ref-type="bibr" rid="b13-jnn-2025-5-2-75">13</xref>,<xref ref-type="bibr" rid="b14-jnn-2025-5-2-75">14</xref>&#x0005d;. Conventional IONM remains vulnerable to tube malposition, electrocautery artifacts, and cognitive load at the console. To motivate an augmented approach, <xref rid="f1-jnn-2025-5-2-75" ref-type="fig">Figure 1</xref> contrasts legacy monitoring, prevalent clinical failure modes, and an AI-enhanced workflow that integrates analytics, synchronized video, and graded alerts.</p>
</sec>
<sec>
<title>Artificial Intelligence and Machine Learning in Intraoperative Neuromonitoring Signal Interpretation</title>
<sec>
<title>1. End-to-end deep learning on electromyography</title>
<p>Regarding objectives and data, EMG-centric investigations typically aim to classify EMG morphologies, to detect pre-LOS trends in streaming signals, and to suppress artifacts. One representative end-to-end approach used a hybrid of convolutional and recurrent layers trained on free-running EMG segments collected during thyroidectomy and achieved cross-patient accuracies near or above 0.85 with sensitivities of 0.90 or higher for abnormal morphologies, and in carefully curated long streams reported sensitivity and specificity exceeding 0.97 under stringent artifact thresholds &#x0005b;<xref ref-type="bibr" rid="b15-jnn-2025-5-2-75">15</xref>&#x0005d;. Although sample sizes and labeling protocols vary, these experiments demonstrate the feasibility of automatic triage of long EMG sequences and lay groundwork for prospective always-on alerting &#x0005b;<xref ref-type="bibr" rid="b3-jnn-2025-5-2-75">3</xref>,<xref ref-type="bibr" rid="b16-jnn-2025-5-2-75">16</xref>&#x0005d;.</p>
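<p>For readers unfamiliar with this model class, the following sketch shows a minimal convolutional&#x02013;recurrent classifier for fixed-length EMG windows in PyTorch. The layer sizes, window length, and class labels are assumptions for illustration and do not reproduce the cited architecture.</p>
<preformat>
# Hypothetical convolutional-recurrent classifier for fixed-length EMG windows
# (e.g., normal vs. irritation vs. artifact). Sizes are illustrative only.
import torch
import torch.nn as nn

class EMGConvGRU(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, 1, samples)
        z = self.encoder(x)                # (batch, 32, time)
        z = z.transpose(1, 2)              # (batch, time, 32)
        _, h = self.gru(z)                 # h: (1, batch, 64)
        return self.head(h.squeeze(0))     # class logits

logits = EMGConvGRU()(torch.randn(8, 1, 2000))   # eight one-channel EMG windows
</preformat>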
<p>With respect to strengths and gaps, deep temporal models such as convolutional&#x02013;recurrent hybrids and emerging transformers can learn discriminative time and frequency signatures that surpass simple amplitude thresholds. Nonetheless, available datasets are modest in size, labeling is vulnerable to variability between raters, and external validation is limited. Only a small number of studies report end-to-end system latency in the operating room, a metric that is essential for establishing feasibility for intraoperative decision support &#x0005b;<xref ref-type="bibr" rid="b17-jnn-2025-5-2-75">17</xref>&#x0005d;.</p>
</sec>
<sec>
<title>2. Classical machine learning with engineered features</title>
<p>An alternative strategy engineers interpretable descriptors, such as amplitude and latency deltas, trend slopes, area under the trend, and spectral entropy, which are then fed to support vector machines, random forests, or gradient-boosting ensembles. In broader IONM contexts, classical ML has proven competitive on multi-class tasks under careful normalization and site-specific calibration, while offering transparent feature importance for clinician trust &#x0005b;<xref ref-type="bibr" rid="b16-jnn-2025-5-2-75">16</xref>&#x0005d;. Yet, as complexity and noise increase, purely feature-engineered systems may underperform deep learning unless regularly adapted to local device characteristics &#x0005b;<xref ref-type="bibr" rid="b18-jnn-2025-5-2-75">18</xref>,<xref ref-type="bibr" rid="b19-jnn-2025-5-2-75">19</xref>&#x0005d;.</p>
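<p>A minimal sketch of such a feature-engineered pipeline, assuming amplitude and latency trends referenced to a case baseline, is shown below; the descriptor set mirrors those named above, while the specific functions and parameters are illustrative.</p>
<preformat>
# Illustrative feature-engineered pipeline: trend and spectral descriptors fed
# to a random forest. Feature choices mirror those named in the text; the
# implementation itself is hypothetical.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

def spectral_entropy(window, fs=2000):
    """Shannon entropy of the normalized Welch power spectrum."""
    f, psd = welch(window, fs=fs, nperseg=min(256, len(window)))
    p = psd / psd.sum()
    return float(-(p * np.log2(p + 1e-12)).sum())

def trend_features(amplitudes, latencies, baseline_amp, baseline_lat):
    """Per-window descriptors referenced to the case baseline (NumPy arrays)."""
    t = np.arange(len(amplitudes))
    return np.array([
        amplitudes[-1] / baseline_amp - 1.0,           # relative amplitude delta
        latencies[-1] / baseline_lat - 1.0,            # relative latency delta
        np.polyfit(t, amplitudes, 1)[0],               # amplitude trend slope
        float(np.sum(amplitudes - baseline_amp)),      # area under the trend
    ])

# X stacks rows of [trend features..., spectral entropy]; y holds adjudicated labels.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
# clf.fit(X, y); clf.feature_importances_ then provides the transparent ranking
# that supports clinician-facing explanations.
</preformat>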
</sec>
<sec>
<title>3. Motor evoked potentials analytics as a transferable paradigm</title>
<p>Although most MEP analytics derive from neurosurgical and spinal IONM, their methodological contributions are directly transferable. In a 2024 multicenter study in Computers in Biology and Medicine, Boaro et al. &#x0005b;<xref ref-type="bibr" rid="b20-jnn-2025-5-2-75">20</xref>&#x0005d; trained five classifiers on an intraoperative MEP database assembled across six centers and compared model performance with human experts; ML attained expert-level muscle classification on held-out patients, illustrating that supervised models can scale to complex intraoperative signals with center-level heterogeneity. Follow-on bicentric work emphasized explainability, using feature attribution to highlight signal components driving classifications, an essential ingredient for surgeon acceptance &#x0005b;<xref ref-type="bibr" rid="b21-jnn-2025-5-2-75">21</xref>&#x0005d;.</p>
</sec>
<sec>
<title>4. Computer vision to localize the recurrent laryngeal nerve in surgical video</title>
<p>Computer-vision analysis provides the spatial context that signal-only systems lack. Gong et al. &#x0005b;<xref ref-type="bibr" rid="b4-jnn-2025-5-2-75">4</xref>&#x0005d; developed an end-to-end deep network that identifies and segments the RLN during open thyroidectomy. The model achieved quantitative segmentation performance sufficient to support augmented visualization. More recent endoscopic cohorts show similar feasibility for nerve recognition, which suggests that EMG-based alerts can be fused with anatomy-aware overlays to guide traction release or energy device use at the precise site of risk &#x0005b;<xref ref-type="bibr" rid="b4-jnn-2025-5-2-75">4</xref>&#x0005d;.</p>
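<p>The following sketch illustrates the general encoder&#x02013;decoder pattern used for this kind of pixel-level nerve segmentation; the architecture, layer sizes, and input resolution are simplified assumptions and are far smaller than the published models.</p>
<preformat>
# Illustrative, minimal encoder-decoder segmenter for nerve pixels in video
# frames. Layer sizes and input resolution are simplified assumptions; published
# models are substantially larger and trained on annotated surgical video.
import torch
import torch.nn as nn

class TinyNerveSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),   # one-channel mask logits
        )

    def forward(self, frame):              # frame: (batch, 3, H, W)
        return self.up(self.down(frame))   # per-pixel logits; sigmoid gives the mask

mask_logits = TinyNerveSegmenter()(torch.randn(1, 3, 256, 256))
</preformat>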
<p>Taken together, evidence across EMG, MEP, and surgical video points to a consistent pattern. To consolidate Sections 3.1&#x02013;3.4, <xref rid="t1-jnn-2025-5-2-75" ref-type="table">Table 1</xref> contrasts inputs, objectives, representative models, strengths, and limitations across four IONM strategies. Deep temporal models generally surpass static thresholding when they encounter complex morphologies. For clinical adoption, systems must provide calibrated uncertainty and clear explanations in the operating room, and strong cross-site generalization remains the principal barrier to translation. A practical near-term strategy is a hybrid workflow that preserves sCE and loss-of-signal rules for safety and interpretability while using ML to filter artifacts, prioritize risk, and detect earlier trend inflections &#x0005b;<xref ref-type="bibr" rid="b16-jnn-2025-5-2-75">16</xref>,<xref ref-type="bibr" rid="b22-jnn-2025-5-2-75">22</xref>&#x0005d;.</p>
</sec>
</sec>
<sec>
<title>Recurrent Laryngeal Nerve Palsy Prediction Models</title>
<sec>
<title>1. Inputs, architectures, and datasets</title>
<p>Predictors of postoperative RLN dysfunction fall into signal-anchored features and patient/procedural covariates &#x0005b;<xref ref-type="bibr" rid="b10-jnn-2025-5-2-75">10</xref>&#x0005d;. Signal-anchored features include LOS type and timing, end-of-case amplitudes (R2/V2), relative amplitude drop, latency rise, and intraoperative signal recovery (ISR) after LOS. Classical multivariable logistic regression and receiver operating characteristic-derived thresholds have dominated to date, though registry-scale studies have introduced ensembles (Super Learner, XGBoost) for composite outcomes &#x0005b;<xref ref-type="bibr" rid="b2-jnn-2025-5-2-75">2</xref>&#x0005d;. Deep learning is beginning to learn directly from raw EMG and associated perioperative data streams but has not yet produced multicenter, prospective, in-loop evaluations in thyroid surgery &#x0005b;<xref ref-type="bibr" rid="b3-jnn-2025-5-2-75">3</xref>,<xref ref-type="bibr" rid="b23-jnn-2025-5-2-75">23</xref>&#x0005d;.</p>
</sec>
<sec>
<title>2. Signal-anchored prognostics with prospective evidence</title>
<p>Prospective evidence provides the foundation for most intraoperative decision rules. In a continuous monitoring cohort of 785 patients, Schneider et al. &#x0005b;<xref ref-type="bibr" rid="b2-jnn-2025-5-2-75">2</xref>,<xref ref-type="bibr" rid="b10-jnn-2025-5-2-75">10</xref>&#x0005d; described the dynamics of signal loss and recovery and linked type 1 and type 2 patterns to early vocal fold mobility. This study then defined actionable ISR thresholds, showing that amplitude recovery around the 50 percent level after signal loss was associated with normal early vocal fold function, whereas absent or minimal recovery was associated with transient palsy. These cutoffs, derived from receiver operating characteristic analyses, translate into practical go or no-go decisions for staging and supply labeled outcomes for training ML classifiers &#x0005b;<xref ref-type="bibr" rid="b2-jnn-2025-5-2-75">2</xref>,<xref ref-type="bibr" rid="b10-jnn-2025-5-2-75">10</xref>&#x0005d;.</p>
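<p>As an illustration of how such receiver operating characteristic-derived cutoffs are obtained, the sketch below derives an ISR cutoff from labeled recovery ratios using the Youden index; the arrays shown are toy values for illustration, not study data.</p>
<preformat>
# Hypothetical derivation of an intraoperative signal recovery (ISR) cutoff
# from labeled cases using ROC analysis and the Youden index.
import numpy as np
from sklearn.metrics import roc_curve

# isr_ratio: recovered amplitude divided by baseline amplitude after LOS (per case)
# normal_fn: 1 if early postoperative vocal fold function was normal
isr_ratio = np.array([0.10, 0.35, 0.48, 0.55, 0.62, 0.80, 0.15, 0.70])
normal_fn = np.array([0,    0,    0,    1,    1,    1,    0,    1])

fpr, tpr, thresholds = roc_curve(normal_fn, isr_ratio)
best = np.argmax(tpr - fpr)                     # Youden's J statistic
print(f"Suggested ISR cutoff: {thresholds[best]:.2f}")
</preformat>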
</sec>
<sec>
<title>3. Registry-based ensembles for complications (including recurrent laryngeal nerve injury)</title>
<p>Analyses of American College of Surgeons National Surgical Quality Improvement Program thyroidectomy&#x02013;targeted data indicate that ensemble learning provides modest gains over logistic regression for predicting postoperative complications, including RLN injury and cervical hematoma. The absence of waveform inputs and the post hoc character of registry modeling limit usefulness during the operation. These observations support the development of integrated systems that combine patient-level risk with streaming EMG to deliver timely, individualized estimates that can influence intraoperative decisions &#x0005b;<xref ref-type="bibr" rid="b3-jnn-2025-5-2-75">3</xref>,<xref ref-type="bibr" rid="b16-jnn-2025-5-2-75">16</xref>&#x0005d;.</p>
</sec>
<sec>
<title>4. Voice-centric outcomes beyond binary palsy</title>
<p>Beyond frank palsy, voice quality is a patient-salient endpoint. A 2022 study in Sensors trained a deep network on pre- and early postoperative voice spectrograms to predict 3-month GRBAS (grade, roughness, breathiness, asthenia, strain) scores after thyroid surgery, achieving promising discrimination. Such work suggests a future in which intraoperative risk estimates are linked to longitudinal functional outcomes, enabling targeted rehabilitation or nerve-reinnervation strategies even when laryngoscopy is normal &#x0005b;<xref ref-type="bibr" rid="b24-jnn-2025-5-2-75">24</xref>&#x0005d;.</p>
<p>Regarding validation status and remaining gaps, most palsy-prediction studies are retrospective and single-center. Even when thresholds are derived prospectively from intraoperative signals, they often require recalibration across devices and anesthetic regimens. Only a small subset of models reports real-time metrics that are essential for intraoperative use, including time to alert, false-alarm burden per hour, and measurable decision impact &#x0005b;<xref ref-type="bibr" rid="b5-jnn-2025-5-2-75">5</xref>,<xref ref-type="bibr" rid="b6-jnn-2025-5-2-75">6</xref>&#x0005d;. Meaningful progress will require multicenter, surgeon-in-the-loop evaluations that adhere to contemporary AI trial-reporting standards &#x0005b;<xref ref-type="bibr" rid="b7-jnn-2025-5-2-75">7</xref>&#x0005d;.</p>
</sec>
</sec>
<sec>
<title>Automated Loss of Signal Detection and Real-Time Alerts</title>
<sec>
<title>1. Detection paradigms: thresholds, machine learning, and hybrids</title>
<p>Threshold-based rules remain the foundation of current practice. Amplitude and latency thresholds derived from physiology and from prospective cohorts, including definitions of LOS and the sCE, are simple to apply, easy to interpret, and generally agnostic to device design. They are also aligned with INMSG recommendations for troubleshooting and for staging bilateral procedures. Their limitations are vulnerability to artifacts and a lack of individualized risk calibration &#x0005b;<xref ref-type="bibr" rid="b1-jnn-2025-5-2-75">1</xref>,<xref ref-type="bibr" rid="b10-jnn-2025-5-2-75">10</xref>&#x0005d;.</p>
<p>ML-based detectors offer a complementary pathway. Learned classifiers analyze windows of EMG and estimate the probability of irritation or impending LOS while incorporating methods for artifact suppression, and early studies report high discrimination for abnormal morphologies &#x0005b;<xref ref-type="bibr" rid="b18-jnn-2025-5-2-75">18</xref>&#x0005d;. In parallel, supervised pipelines for intraoperative MEP classification have reached expert-level accuracy across multiple centers, which supports the feasibility of robust real-time inference when datasets, preprocessing, and postprocessing are carefully designed. What remains absent in thyroid surgery is a prospective and continuous deployment that operates in an always-on mode with explicit measurement of latency and with analysis of its impact on intraoperative decisions &#x0005b;<xref ref-type="bibr" rid="b20-jnn-2025-5-2-75">20</xref>&#x0005d;.</p>
<p>A practical approach in the near term is a hybrid workflow that preserves established rules as a safety net and uses ML to increase sensitivity and to reject artifacts. Such systems can present confidence-gated alerts with graded urgency, for example, a caution level that signals an irritation pattern and a warning level that signals a high-risk trend consistent with the sCE &#x0005b;<xref ref-type="bibr" rid="b17-jnn-2025-5-2-75">17</xref>,<xref ref-type="bibr" rid="b25-jnn-2025-5-2-75">25</xref>&#x0005d;.</p>
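<p>A minimal sketch of such confidence-gated, graded alerting is shown below; the tier names and probability thresholds are illustrative assumptions, not a validated configuration.</p>
<preformat>
# Hypothetical hybrid alerting: deterministic rules act as a safety net while an
# ML detector adds earlier, confidence-gated warnings. Tier names and probability
# thresholds are illustrative assumptions.
def grade_alert(amplitude_uv, latency_ms, baseline_amp_uv, baseline_lat_ms,
                ml_risk_probability, caution_p=0.5, warning_p=0.8):
    # Rule-based safety net, always evaluated first.
    if amplitude_uv &lt; 100.0:
        return "LOS", "amplitude below 100 microvolts under stimulation"
    amp_drop = 1.0 - amplitude_uv / baseline_amp_uv
    lat_rise = latency_ms / baseline_lat_ms - 1.0
    if amp_drop >= 0.50 and lat_rise >= 0.10:
        return "WARNING", "severe combined event criterion met"
    # ML layer for earlier, graded notification.
    if ml_risk_probability >= warning_p:
        return "WARNING", "high-risk trend flagged by ML detector"
    if ml_risk_probability >= caution_p:
        return "CAUTION", "irritation pattern flagged by ML detector"
    return "OK", "no action required"
</preformat>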
</sec>
<sec>
<title>2. Interfaces, latency, and reliability</title>
<p>Real-time clinical value depends on the total latency from acquisition to alert, including the sampling interval for APS, typically 1 to 3 Hz, along with preprocessing, postprocessing, model inference, and the time required for display and acknowledgement. Systems intended for intraoperative use should add less than 1 second of computational and display delay beyond the sampling cadence, present clear reasons for each alert through trend overlays, salient time&#x02013;frequency views, or synchronized video frames, and incorporate structured troubleshooting for common artifacts such as tube rotation, inadequate contact, or excessive neuromuscular blockade &#x0005b;<xref ref-type="bibr" rid="b14-jnn-2025-5-2-75">14</xref>,<xref ref-type="bibr" rid="b26-jnn-2025-5-2-75">26</xref>&#x0005d;. Evidence from clinical waveform studies further underscores the importance of rigorous signal conditioning and amplitude normalization to ensure consistent interpretation across patients and cases &#x0005b;<xref ref-type="bibr" rid="b13-jnn-2025-5-2-75">13</xref>&#x0005d;. To translate the above design choices into practice, <xref rid="t2-jnn-2025-5-2-75" ref-type="table">Table 2</xref> contrasts threshold-based, ML-based, and hybrid alerting with respect to inputs, detection logic, interpretability, acquisition-to-alert latency, and artifact robustness.</p>
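<p>The latency budget described above can be made explicit with a simple accounting sketch; the component timings below are assumptions chosen only to illustrate how the sub-second requirement is checked.</p>
<preformat>
# Illustrative acquisition-to-alert latency budget for an APS-paced system.
# Component timings are assumptions, not measured values.
aps_frequency_hz = 2.0                       # typical APS cadence is 1-3 Hz
sampling_interval_s = 1.0 / aps_frequency_hz

budget_s = {
    "preprocessing": 0.10,     # filtering and artifact gating
    "model_inference": 0.05,   # classifier forward pass
    "postprocessing": 0.05,    # smoothing and confidence gating
    "display_and_ack": 0.30,   # render the alert and its rationale
}

computational_delay_s = sum(budget_s.values())
total_latency_s = sampling_interval_s + computational_delay_s
assert computational_delay_s &lt; 1.0, "added delay should stay under 1 second"
print(f"Added delay {computational_delay_s:.2f} s, total {total_latency_s:.2f} s")
</preformat>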
</sec>
<sec>
<title>3. Evidence base and adjacent developments</title>
<p>Guidelines from the INMSG, together with multicenter outcome studies, provide the strongest clinical foundation for real-time alerting, and meta-analytic work has evaluated the impact of IONM on outcomes at scale &#x0005b;<xref ref-type="bibr" rid="b3-jnn-2025-5-2-75">3</xref>&#x0005d;. In parallel, computer-vision methods can segment the RLN in open and endoscopic thyroidectomy, which makes it feasible to localize risk in space and to co-register EMG alerts with anatomy-aware overlays &#x0005b;<xref ref-type="bibr" rid="b3-jnn-2025-5-2-75">3</xref>,<xref ref-type="bibr" rid="b4-jnn-2025-5-2-75">4</xref>&#x0005d;.</p>
</sec>
<sec>
<title>4. Regulatory and safety perspectives</title>
<p>Although vendor materials describe device features, peer-reviewed analyses of AI/ML medical devices emphasize lifecycle evidence, generalizability, and post-deployment surveillance. Benjamens et al. &#x0005b;<xref ref-type="bibr" rid="b27-jnn-2025-5-2-75">27</xref>&#x0005d; cataloged US Food and Drug Administration-cleared AI/ML devices and underscored transparency. Fraser et al. &#x0005b;<xref ref-type="bibr" rid="b28-jnn-2025-5-2-75">28</xref>&#x0005d; synthesized regulatory expectations for health software and AI, highlighting software lifecycle (IEC 62304), risk management (ISO 14971), usability (IEC 62366-1), and the need for change control in adaptive models. These frameworks translate directly to AI-enabled IONM modules &#x0005b;<xref ref-type="bibr" rid="b28-jnn-2025-5-2-75">28</xref>&#x0005d;.</p>
</sec>
</sec>
<sec>
<title>Clinical Translation and Integration Roadmap</title>
<sec>
<title>1. Barriers to adoption</title>
<p>Data governance and privacy are foundational to clinical translation. Generalizable models require multicenter repositories of waveforms and video with harmonized labels and rich metadata that specify device characteristics, anesthetic regimen, and electrode placement. Building such resources depends on robust consent processes, rigorous de-identification, and institutional oversight that explicitly permits secondary use for algorithm development and external validation &#x0005b;<xref ref-type="bibr" rid="b27-jnn-2025-5-2-75">27</xref>&#x0005d;.</p>
<p>Regulatory approval demands a full software-as-a-medical-device pathway. AI-enabled detectors of LOS and risk estimators fall within this category and must satisfy evidence standards across the product lifecycle. Peer-reviewed regulatory syntheses emphasize clear requirements and hazard analysis, thorough verification and validation, usability engineering, cybersecurity controls, and post-market surveillance, together with transparent documentation of data lineage, model versioning, and processes for change management after deployment &#x0005b;<xref ref-type="bibr" rid="b20-jnn-2025-5-2-75">20</xref>&#x0005d;.</p>
<p>Surgeon trust and workflow fit determine real-world adoption. Human-factors studies in surgical AI show that acceptance depends on calibrated alerts, explanations that can be understood at a glance, and reliable suppression of artifact-driven false positives. Alarm fatigue undermines confidence. For neuromonitoring, the interface should display the rationale for each alert, present confidence intervals, and link every alert tier to a concrete troubleshooting action that aligns with established guideline algorithms &#x0005b;<xref ref-type="bibr" rid="b18-jnn-2025-5-2-75">18</xref>&#x0005d;.</p>
</sec>
<sec>
<title>2. Validation requirements: from offline accuracy to in-the-loop impact</title>
<p>Prospective evaluation and reporting standards are essential for moving from retrospective accuracy to clinical utility. The CONSORT-AI extension defines reporting items for randomized trials of AI interventions, SPIRIT-AI sets protocol expectations, and DECIDE-AI outlines requirements for early live clinical evaluations with attention to human-AI interaction, error analysis, usability, and context of use &#x0005b;<xref ref-type="bibr" rid="b5-jnn-2025-5-2-75">5</xref>,<xref ref-type="bibr" rid="b6-jnn-2025-5-2-75">6</xref>&#x0005d;. For IONM, studies should report not only area under the curve (AUC), sensitivity, specificity, and calibration but also time to alert, false-alarm rate per hour, and the impact on surgical decisions, together with patient-centered outcomes that include short-term vocal fold mobility and voice quality and quality of life at 3 to 12 months &#x0005b;<xref ref-type="bibr" rid="b5-jnn-2025-5-2-75">5</xref>&#x0005d;.</p>
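<p>A brief sketch of how these offline and real-time metrics could be computed from held-out predictions and alert logs is given below; the data structures and helper names are assumptions for illustration.</p>
<preformat>
# Hypothetical computation of offline and real-time reporting metrics from
# held-out predictions and alert logs. Data structures and helpers are assumed.
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.calibration import calibration_curve

def offline_metrics(y_true, y_prob, threshold=0.5):
    """AUC, sensitivity, specificity, and a reliability (calibration) curve.
    y_true and y_prob are NumPy arrays of labels and predicted probabilities."""
    auc = roc_auc_score(y_true, y_prob)
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=5)
    return auc, sens, spec, (mean_pred, frac_pos)

def false_alarms_per_hour(n_false_alerts, monitored_minutes):
    """False-alarm burden normalized to monitoring time."""
    return 60.0 * n_false_alerts / monitored_minutes

def time_to_alert_s(event_onset_s, first_alert_s):
    """Seconds from adjudicated event onset to the first system alert."""
    return first_alert_s - event_onset_s
</preformat>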
</sec>
<sec>
<title>3. User interface and alerting principles</title>
<p>Design priorities focus on graded alerts that are tied to explicit actions, clear explainability through salient EMG windows and amplitude and latency overlays referenced to baseline, and synchronized video that makes the anatomic locus of risk visible. Interfaces should also present uncertainty estimates and incorporate robust artifact detection, including recognition of tube rotation signatures, to limit false positives, while maintaining workflow-aware ergonomics with minimal additional hardware and a small footprint on existing consoles &#x0005b;<xref ref-type="bibr" rid="b13-jnn-2025-5-2-75">13</xref>&#x0005d;. Evidence from intraoperative signal analytics and voice outcome studies further indicates that consistent signal conditioning and standardized visualization improve interpretability and facilitate adoption &#x0005b;<xref ref-type="bibr" rid="b24-jnn-2025-5-2-75">24</xref>&#x0005d;.</p>
</sec>
<sec>
<title>4. Multidisciplinary collaboration</title>
<p>Effective translation requires a joint governance model that brings together endocrine and otolaryngology surgeons, who provide clinical leadership and adjudicate labels; neuromonitoring specialists and anesthesiologists, who define protocols, artifact taxonomies, and APS parameters; data scientists and engineers, who manage modeling, machine learning operations, and drift monitoring; human-factors experts, who conduct usability testing; and regulatory teams, who ensure compliance with software-as-a-medical-device requirements. Editorials and guidance across the AI trial literature emphasize multi-stakeholder design from protocol development through post-market surveillance &#x0005b;<xref ref-type="bibr" rid="b5-jnn-2025-5-2-75">5</xref>&#x0005d;.</p>
</sec>
<sec>
<title>5. Step-by-step roadmap</title>
<p>An effective implementation pathway begins by defining the clinical decision and the target operating points. Specify how the system should change behavior by setting explicit sCE probability thresholds within a defined time window and by recommending staging when the ISR probability remains below a prespecified level after LOS, and pre-register primary and secondary outcomes &#x0005b;<xref ref-type="bibr" rid="b2-jnn-2025-5-2-75">2</xref>&#x0005d;.</p>
<p>Standardization of acquisition and labeling is essential. Harmonize sampling rates, filtering, APS frequency, and event ontologies such as loss-of-signal types, sCEs, and artifact classes, and establish multirater adjudication with rigorous quality control &#x0005b;<xref ref-type="bibr" rid="b2-jnn-2025-5-2-75">2</xref>&#x0005d;.</p>
<p>Model development should follow rigorous ML practice by using cross-site splits with site-held-out testing. Report calibration curves and decision-curve analyses and quantify subgroup performance. For time-series models include explicit latency budgets, and for artifact robustness include adversarial tests &#x0005b;<xref ref-type="bibr" rid="b18-jnn-2025-5-2-75">18</xref>&#x0005d;.</p>
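<p>A minimal sketch of the site-held-out evaluation described above, using a leave-one-site-out split, is shown below; the model choice, feature matrix, and site identifiers are illustrative assumptions.</p>
<preformat>
# Hypothetical leave-one-site-out evaluation to stress cross-site generalization.
# X, y, and site_ids are assumed NumPy arrays (one row / label / site id per case).
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def site_held_out_auc(X, y, site_ids):
    """Train on all sites but one, test on the held-out site, repeat per site."""
    aucs = {}
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=site_ids):
        model = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
        prob = model.predict_proba(X[test_idx])[:, 1]
        aucs[site_ids[test_idx][0]] = roc_auc_score(y[test_idx], prob)
    return aucs   # per-site AUC exposes sites where the model fails to transfer
</preformat>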
<p>Human-factors engineering is required throughout. Prototype the interface and alerts with surgeons and technologists, run simulated cases and cognitive walkthroughs, and iterate to minimize alert fatigue while maintaining adequate sensitivity.</p>
<p>Before clinical use, conduct shadow-mode evaluation by prospectively running the model in parallel with standard care without influencing decisions, and measure time to alert, false-alarm rate, disagreement with human interpretation, and system reliability including uptime and missed data &#x0005b;<xref ref-type="bibr" rid="b5-jnn-2025-5-2-75">5</xref>&#x0005d;.</p>
<p>Subsequently, undertake a controlled pilot that enables surgeon-in-the-loop use with predefined stopping rules and independent safety monitoring, and report methods and results in accordance with SPIRIT-AI and CONSORT-AI &#x0005b;<xref ref-type="bibr" rid="b5-jnn-2025-5-2-75">5</xref>,<xref ref-type="bibr" rid="b6-jnn-2025-5-2-75">6</xref>&#x0005d;.</p>
<p>Finally, plan scale-out and surveillance with post-market monitoring for performance drift, recalibration when devices or anesthesia change, equity checks across subgroups, and transparent periodic reporting that aligns with contemporary reviews of medical-AI deployment &#x0005b;<xref ref-type="bibr" rid="b28-jnn-2025-5-2-75">28</xref>&#x0005d;. <xref rid="f2-jnn-2025-5-2-75" ref-type="fig">Figure 2</xref> provides a unifying system view that we reference throughout this section when discussing governance, validation, user interface, and staged deployment.</p>
</sec>
</sec>
<sec sec-type="conclusions">
<title>Conclusion and Future Directions</title>
<p>The arc of evidence suggests that AI/ML can shift IONM from reactive thresholding to proactive, context-aware guidance. On the signal-processing axis, deep temporal models recognize pre-LOS morphologies more sensitively than rule-based heuristics; on the prognostics axis, signal-anchored thresholds calibrated in multicenter cohorts already inform staging decisions and offer strong supervision for ML &#x0005b;<xref ref-type="bibr" rid="b3-jnn-2025-5-2-75">3</xref>,<xref ref-type="bibr" rid="b10-jnn-2025-5-2-75">10</xref>&#x0005d;. On the spatial axis, computer-vision models can identify the RLN and adjacent anatomy in surgical video, creating the possibility of anatomy-aware overlays that co-register with EMG risk scores to pinpoint where mitigation is needed. While aggregate meta-analyses continue to debate the overall effect of IONM on palsy rates at the population level, the combination of standardized continuous IONM (c-IONM) trend criteria, prospective outcome correlates, and emerging AI pipelines provides a credible path to earlier warnings, fewer unnecessary alerts, and more consistent intraoperative decisions &#x0005b;<xref ref-type="bibr" rid="b2-jnn-2025-5-2-75">2</xref>&#x0005d;. The remaining hurdles are not purely technical: generalizable datasets, prospective in-loop evaluations with meaningful endpoints, human-factors-driven interfaces, and software as a medical device-compliant governance will determine whether AI-assisted IONM improves real-world outcomes &#x0005b;<xref ref-type="bibr" rid="b4-jnn-2025-5-2-75">4</xref>&#x0005d;.</p>
<sec>
<title>1. Priority directions for the next decade</title>
<sec>
<title>1) Standardize data and evaluation</title>
<p>Adopt shared schemas for EMG/MEP and synchronized video, including device metadata and artifact taxonomies; report real-time metrics (time-to-alert, false-alarm rate/hour) alongside AUC and calibration. Benchmarks should require external, site-held-out validation to stress generalization &#x0005b;<xref ref-type="bibr" rid="b1-jnn-2025-5-2-75">1</xref>,<xref ref-type="bibr" rid="b13-jnn-2025-5-2-75">13</xref>&#x0005d;.</p>
</sec>
<sec>
<title>2) Build open, multicenter annotated repositories</title>
<p>De-identified waveform repositories with synchronized stimulation logs and adjudicated event labels, LOS type, sCE onset/offset, artifact episodes, are prerequisites for robust training and comparison. Clinical series in c-IONM show that rich labels exist and can be operationalized &#x0005b;<xref ref-type="bibr" rid="b10-jnn-2025-5-2-75">10</xref>,<xref ref-type="bibr" rid="b20-jnn-2025-5-2-75">20</xref>&#x0005d;.</p>
</sec>
<sec>
<title>3) Fuse modalities and context</title>
<p>Combine EMG-based risk with anatomy-aware video segmentation to contextualize alerts at the precise dissection site; integrate patient factors to individualize thresholds &#x0005b;<xref ref-type="bibr" rid="b4-jnn-2025-5-2-75">4</xref>&#x0005d;. Initial feasibility in open and endoscopic thyroidectomy supports a near-term translational program for multimodal fusion &#x0005b;<xref ref-type="bibr" rid="b29-jnn-2025-5-2-75">29</xref>&#x0005d;.</p>
</sec>
<sec>
<title>4) Prospective, surgeon-in-the-loop trials with longitudinal endpoints</title>
<p>Move beyond offline accuracy to measure time-to-mitigation, staging decisions, RLN palsy incidence, and 3- to 12-month voice quality and quality-of-life outcomes; report per SPIRIT-AI, CONSORT-AI, and DECIDE-AI &#x0005b;<xref ref-type="bibr" rid="b30-jnn-2025-5-2-75">30</xref>&#x0005d;.</p>
</sec>
<sec>
<title>5) Deployment engineering and governance</title>
<p>Engineer for sub-second inference beyond APS cadence; implement uncertainty calibration and artifact gating; and align development with evolving regulatory expectations for medical-AI devices (software lifecycle, risk management, usability, and change control) &#x0005b;<xref ref-type="bibr" rid="b27-jnn-2025-5-2-75">27</xref>&#x0005d;. Published regulatory reviews provide the necessary scaffolding for safe scale-out &#x0005b;<xref ref-type="bibr" rid="b28-jnn-2025-5-2-75">28</xref>,<xref ref-type="bibr" rid="b31-jnn-2025-5-2-75">31</xref>&#x0005d;.</p>
<p>If realized, these steps could transform IONM into a predictive safety layer that consistently recognizes danger earlier, explains its recommendations clearly, and measurably reduces RLN injury while preserving surgical efficiency. The opportunity is not merely to automate threshold application, but to elevate monitoring into a multimodal, human-centered system that helps teams make better decisions under uncertainty.</p>
</sec>
</sec>
</sec>
</body>
<back>
<fn-group>
<fn fn-type="financial-disclosure"><p><bold>Funding</bold></p>
<p>None.</p></fn>
<fn fn-type="conflict"><p><bold>Conflict of Interest</bold></p>
<p>Seung Hoon Woo is the Editor-in-Chief of the journal, but was not involved in the review process of this manuscript. Otherwise, there is no conflict of interest to declare.</p></fn>
<fn fn-type="other"><p><bold>Data Availability</bold></p>
<p>None.</p></fn>
<fn fn-type="participating-researchers"><p><bold>Author Contributions</bold></p>
<p>Conceptualization: YSL; Data curation: YSL; Formal analysis: YSL; Investigation: YSL; Methodology: YSL; Project administration: SHW; Software: YSL; Visualization: YSL; Writing–original draft: YSL; Writing–review &amp; editing: all authors.</p></fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="b1-jnn-2025-5-2-75">
<label>1</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Schneider</surname><given-names>R</given-names></name>
<name><surname>Randolph</surname><given-names>GW</given-names></name>
<name><surname>Dionigi</surname><given-names>G</given-names></name>
<name><surname>Wu</surname><given-names>CW</given-names></name>
<name><surname>Barczynski</surname><given-names>M</given-names></name>
<name><surname>Chiang</surname><given-names>FY</given-names></name>
<etal/>
</person-group>
<article-title>International neural monitoring study group guideline 2018 part I: staging bilateral thyroid surgery with monitoring loss of signal</article-title>
<source>Laryngoscope</source>
<year>2018</year>
<volume>128 Suppl 3</volume>
<fpage>S1</fpage>
<lpage>17</lpage>
<comment>doi: 10.1002/lary.27359</comment>
</element-citation></ref>
<ref id="b2-jnn-2025-5-2-75">
<label>2</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Schneider</surname><given-names>R</given-names></name>
<name><surname>Randolph</surname><given-names>G</given-names></name>
<name><surname>Dionigi</surname><given-names>G</given-names></name>
<name><surname>Barczynski</surname><given-names>M</given-names></name>
<name><surname>Chiang</surname><given-names>FY</given-names></name>
<name><surname>Wu</surname><given-names>CW</given-names></name>
<etal/>
</person-group>
<article-title>Prediction of postoperative vocal fold function after intraoperative recovery of loss of signal</article-title>
<source>Laryngoscope</source>
<year>2019</year>
<volume>129</volume>
<fpage>525</fpage>
<lpage>31</lpage>
<comment>doi: 10.1002/lary.27327</comment>
</element-citation></ref>
<ref id="b3-jnn-2025-5-2-75">
<label>3</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Staubitz</surname><given-names>JI</given-names></name>
<name><surname>Watzka</surname><given-names>F</given-names></name>
<name><surname>Poplawski</surname><given-names>A</given-names></name>
<name><surname>Riss</surname><given-names>P</given-names></name>
<name><surname>Clerici</surname><given-names>T</given-names></name>
<name><surname>Bergenfelz</surname><given-names>A</given-names></name>
<etal/>
</person-group>
<article-title>Effect of intraoperative nerve monitoring on postoperative vocal cord palsy rates after thyroidectomy: European multicentre registry-based study</article-title>
<source>BJS Open</source>
<year>2020</year>
<volume>4</volume>
<fpage>821</fpage>
<lpage>9</lpage>
<comment>doi: 10.1002/bjs5.50310</comment>
</element-citation></ref>
<ref id="b4-jnn-2025-5-2-75">
<label>4</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Gong</surname><given-names>J</given-names></name>
<name><surname>Holsinger</surname><given-names>FC</given-names></name>
<name><surname>Noel</surname><given-names>JE</given-names></name>
<name><surname>Mitani</surname><given-names>S</given-names></name>
<name><surname>Jopling</surname><given-names>J</given-names></name>
<name><surname>Bedi</surname><given-names>N</given-names></name>
<etal/>
</person-group>
<article-title>Using deep learning to identify the recurrent laryngeal nerve during thyroidectomy</article-title>
<source>Sci Rep</source>
<year>2021</year>
<volume>11</volume>
<fpage>14306</fpage>
<comment>doi: 10.1038/s41598-021-93202-y</comment>
</element-citation></ref>
<ref id="b5-jnn-2025-5-2-75">
<label>5</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Liu</surname><given-names>X</given-names></name>
<name><surname>Cruz Rivera</surname><given-names>S</given-names></name>
<name><surname>Moher</surname><given-names>D</given-names></name>
<name><surname>Calvert</surname><given-names>MJ</given-names></name>
<name><surname>Denniston</surname><given-names>AK</given-names></name>
<collab>SPIRIT-AI and CONSORT-AI Working Group</collab>
</person-group>
<article-title>Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension</article-title>
<source>Nat Med</source>
<year>2020</year>
<volume>26</volume>
<fpage>1364</fpage>
<lpage>74</lpage>
<comment>doi: 10.1038/s41591-020-1034-x</comment>
</element-citation></ref>
<ref id="b6-jnn-2025-5-2-75">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Cruz Rivera</surname><given-names>S</given-names></name>
<name><surname>Liu</surname><given-names>X</given-names></name>
<name><surname>Chan</surname><given-names>AW</given-names></name>
<name><surname>Denniston</surname><given-names>AK</given-names></name>
<name><surname>Calvert</surname><given-names>MJ</given-names></name>
<collab>SPIRIT-AI and CONSORT-AI Working Group</collab>
<collab>SPIRIT-AI and CONSORT-AI Steering Group</collab>
<collab>SPIRIT-AI and CONSORT-AI Consensus Group</collab>
</person-group>
<article-title>Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension</article-title>
<source>Nat Med</source>
<year>2020</year>
<volume>26</volume>
<fpage>1351</fpage>
<lpage>63</lpage>
<comment>doi: 10.1038/s41591-020-1037-7</comment>
</element-citation></ref>
<ref id="b7-jnn-2025-5-2-75">
<label>7</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Vasey</surname><given-names>B</given-names></name>
<name><surname>Nagendran</surname><given-names>M</given-names></name>
<name><surname>Campbell</surname><given-names>B</given-names></name>
<name><surname>Clifton</surname><given-names>DA</given-names></name>
<name><surname>Collins</surname><given-names>GS</given-names></name>
<name><surname>Denaxas</surname><given-names>S</given-names></name>
<etal/>
</person-group>
<article-title>Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI</article-title>
<source>Nat Med</source>
<year>2022</year>
<volume>28</volume>
<fpage>924</fpage>
<lpage>33</lpage>
<comment>doi: 10.1038/s41591-022-01772-9</comment>
</element-citation></ref>
<ref id="b8-jnn-2025-5-2-75">
<label>8</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Schneider</surname><given-names>R</given-names></name>
<name><surname>Randolph</surname><given-names>GW</given-names></name>
<name><surname>Barczynski</surname><given-names>M</given-names></name>
<name><surname>Dionigi</surname><given-names>G</given-names></name>
<name><surname>Wu</surname><given-names>CW</given-names></name>
<name><surname>Chiang</surname><given-names>FY</given-names></name>
<etal/>
</person-group>
<article-title>Continuous intraoperative neural monitoring of the recurrent nerves in thyroid surgery: a quantum leap in technology</article-title>
<source>Gland Surg</source>
<year>2016</year>
<volume>5</volume>
<fpage>607</fpage>
<lpage>16</lpage>
<comment>doi: 10.21037/gs.2016.11.10</comment>
</element-citation></ref>
<ref id="b9-jnn-2025-5-2-75">
<label>9</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Dionigi</surname><given-names>G</given-names></name>
<name><surname>Chiang</surname><given-names>FY</given-names></name>
<name><surname>Hui</surname><given-names>S</given-names></name>
<name><surname>Wu</surname><given-names>CW</given-names></name>
<name><surname>Xiaoli</surname><given-names>L</given-names></name>
<name><surname>Ferrari</surname><given-names>CC</given-names></name>
<etal/>
</person-group>
<article-title>Continuous intraoperative neuromonitoring (C-IONM) technique with the automatic periodic stimulating (APS) accessory for conventional and endoscopic thyroid surgery</article-title>
<source>Surg Technol Int</source>
<year>2015</year>
<volume>26</volume>
<fpage>101</fpage>
<lpage>14</lpage>
</element-citation></ref>
<ref id="b10-jnn-2025-5-2-75">
<label>10</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Schneider</surname><given-names>R</given-names></name>
<name><surname>Sekulla</surname><given-names>C</given-names></name>
<name><surname>Machens</surname><given-names>A</given-names></name>
<name><surname>Lorenz</surname><given-names>K</given-names></name>
<name><surname>Thanh</surname><given-names>PN</given-names></name>
<name><surname>Dralle</surname><given-names>H</given-names></name>
</person-group>
<article-title>Dynamics of loss and recovery of the nerve monitoring signal during thyroidectomy predict early postoperative vocal fold function</article-title>
<source>Head Neck</source>
<year>2016</year>
<volume>38 Suppl 1</volume>
<fpage>E1144</fpage>
<lpage>51</lpage>
<comment>doi: 10.1002/hed.24175</comment>
</element-citation></ref>
<ref id="b11-jnn-2025-5-2-75">
<label>11</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Schneider</surname><given-names>R</given-names></name>
<name><surname>Randolph</surname><given-names>GW</given-names></name>
<name><surname>Sekulla</surname><given-names>C</given-names></name>
<name><surname>Phelan</surname><given-names>E</given-names></name>
<name><surname>Thanh</surname><given-names>PN</given-names></name>
<name><surname>Bucher</surname><given-names>M</given-names></name>
<etal/>
</person-group>
<article-title>Continuous intraoperative vagus nerve stimulation for identification of imminent recurrent laryngeal nerve injury</article-title>
<source>Head Neck</source>
<year>2013</year>
<volume>35</volume>
<fpage>1591</fpage>
<lpage>8</lpage>
<comment>doi: 10.1002/hed.23187</comment>
</element-citation></ref>
<ref id="b12-jnn-2025-5-2-75">
<label>12</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Brauckhoff</surname><given-names>K</given-names></name>
<name><surname>Aas</surname><given-names>T</given-names></name>
<name><surname>Biermann</surname><given-names>M</given-names></name>
<name><surname>Husby</surname><given-names>P</given-names></name>
</person-group>
<article-title>EMG changes during continuous intraoperative neuromonitoring with sustained recurrent laryngeal nerve traction in a porcine model</article-title>
<source>Langenbecks Arch Surg</source>
<year>2017</year>
<volume>402</volume>
<fpage>675</fpage>
<lpage>81</lpage>
<comment>doi: 10.1007/s00423-016-1419-y</comment>
</element-citation></ref>
<ref id="b13-jnn-2025-5-2-75">
<label>13</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Mazzone</surname><given-names>S</given-names></name>
<name><surname>Esposito</surname><given-names>A</given-names></name>
<name><surname>Giacomarra</surname><given-names>V</given-names></name>
</person-group>
<article-title>Continuous intraoperative nerve monitoring in thyroid surgery: can amplitude be a standardized parameter?</article-title>
<source>Front Endocrinol (Lausanne)</source>
<year>2021</year>
<volume>12</volume>
<fpage>714699</fpage>
<comment>doi: 10.3389/fendo.2021.714699</comment>
</element-citation></ref>
<ref id="b14-jnn-2025-5-2-75">
<label>14</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Wu</surname><given-names>CW</given-names></name>
<name><surname>Wang</surname><given-names>MH</given-names></name>
<name><surname>Chen</surname><given-names>CC</given-names></name>
<name><surname>Chen</surname><given-names>HC</given-names></name>
<name><surname>Chen</surname><given-names>HY</given-names></name>
<name><surname>Yu</surname><given-names>JY</given-names></name>
<etal/>
</person-group>
<article-title>Loss of signal in recurrent nerve neuromonitoring: causes and management</article-title>
<source>Gland Surg</source>
<year>2015</year>
<volume>4</volume>
<fpage>19</fpage>
<lpage>26</lpage>
<comment>doi: 10.3978/j.issn.2227-684X.2014.12.03</comment>
</element-citation></ref>
<ref id="b15-jnn-2025-5-2-75">
<label>15</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Zha</surname><given-names>X</given-names></name>
<name><surname>Wehbe</surname><given-names>L</given-names></name>
<name><surname>Sclabassi</surname><given-names>RJ</given-names></name>
<name><surname>Mace</surname><given-names>Z</given-names></name>
<name><surname>Liang</surname><given-names>YV</given-names></name>
<name><surname>Yu</surname><given-names>A</given-names></name>
<etal/>
</person-group>
<article-title>A deep learning model for automated classification of intraoperative continuous EMG</article-title>
<source>IEEE Trans Med Robot Bionics</source>
<year>2021</year>
<volume>3</volume>
<fpage>44</fpage>
<lpage>52</lpage>
<comment>doi: 10.1109/tmrb.2020.3048255</comment>
</element-citation></ref>
<ref id="b16-jnn-2025-5-2-75">
<label>16</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Park</surname><given-names>D</given-names></name>
<name><surname>Kim</surname><given-names>I</given-names></name>
</person-group>
<article-title>Application of machine learning in the field of intraoperative neurophysiological monitoring: a narrative review</article-title>
<source>Appl Sci</source>
<year>2022</year>
<volume>12</volume>
<fpage>7943</fpage>
<comment>doi: 10.3390/app12157943</comment>
</element-citation></ref>
<ref id="b17-jnn-2025-5-2-75">
<label>17</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Wilson</surname><given-names>JP</given-names><suffix>Jr</suffix></name>
<name><surname>Kumbhare</surname><given-names>D</given-names></name>
<name><surname>Ronkon</surname><given-names>C</given-names></name>
<name><surname>Guthikonda</surname><given-names>B</given-names></name>
<name><surname>Hoang</surname><given-names>S</given-names></name>
</person-group>
<article-title>Application of machine learning strategies to model the effects of sevoflurane on somatosensory-evoked potentials during spine surgery</article-title>
<source>Diagnostics (Basel)</source>
<year>2023</year>
<volume>13</volume>
<fpage>3389</fpage>
<comment>doi: 10.3390/diagnostics13213389</comment>
</element-citation></ref>
<ref id="b18-jnn-2025-5-2-75">
<label>18</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Wermelinger</surname><given-names>J</given-names></name>
<name><surname>Parduzi</surname><given-names>Q</given-names></name>
<name><surname>Sariyar</surname><given-names>M</given-names></name>
<name><surname>Raabe</surname><given-names>A</given-names></name>
<name><surname>Schneider</surname><given-names>UC</given-names></name>
<name><surname>Seidel</surname><given-names>K</given-names></name>
</person-group>
<article-title>Opportunities and challenges of supervised machine learning for the classification of motor evoked potentials according to muscles</article-title>
<source>BMC Med Inform Decis Mak</source>
<year>2023</year>
<volume>23</volume>
<fpage>198</fpage>
<comment>doi: 10.1186/s12911-023-02276-3</comment>
</element-citation></ref>
<ref id="b19-jnn-2025-5-2-75">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Kok</surname><given-names>CL</given-names></name>
<name><surname>Ho</surname><given-names>CK</given-names></name>
<name><surname>Tan</surname><given-names>FK</given-names></name>
<name><surname>Koh</surname><given-names>YY</given-names></name>
</person-group>
<article-title>Machine learning-based feature extraction and classification of EMG signals for intuitive prosthetic control</article-title>
<source>Appl Sci</source>
<year>2024</year>
<volume>14</volume>
<fpage>5784</fpage>
<comment>doi: 10.3390/app14135784</comment>
</element-citation></ref>
<ref id="b20-jnn-2025-5-2-75">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Boaro</surname><given-names>A</given-names></name>
<name><surname>Azzari</surname><given-names>A</given-names></name>
<name><surname>Basaldella</surname><given-names>F</given-names></name>
<name><surname>Nunes</surname><given-names>S</given-names></name>
<name><surname>Feletti</surname><given-names>A</given-names></name>
<name><surname>Bicego</surname><given-names>M</given-names></name>
<etal/>
</person-group>
<article-title>Machine learning allows expert level classification of intraoperative motor evoked potentials during neurosurgical procedures</article-title>
<source>Comput Biol Med</source>
<year>2024</year>
<volume>180</volume>
<fpage>109032</fpage>
<comment>doi: 10.1016/j.compbiomed.2024.109032</comment>
</element-citation></ref>
<ref id="b21-jnn-2025-5-2-75">
<label>21</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Parduzi</surname><given-names>Q</given-names></name>
<name><surname>Wermelinger</surname><given-names>J</given-names></name>
<name><surname>Koller</surname><given-names>SD</given-names></name>
<name><surname>Sariyar</surname><given-names>M</given-names></name>
<name><surname>Schneider</surname><given-names>U</given-names></name>
<name><surname>Raabe</surname><given-names>A</given-names></name>
<etal/>
</person-group>
<article-title>Explainable AI for intraoperative motor-evoked potential muscle classification in neurosurgery: bicentric retrospective study</article-title>
<source>J Med Internet Res</source>
<year>2025</year>
<volume>27</volume>
<fpage>e63937</fpage>
<comment>doi: 10.2196/63937</comment>
</element-citation></ref>
<ref id="b22-jnn-2025-5-2-75">
<label>22</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Oh</surname><given-names>MY</given-names></name>
<name><surname>Choi</surname><given-names>Y</given-names></name>
<name><surname>Jang</surname><given-names>T</given-names></name>
<name><surname>Choe</surname><given-names>EK</given-names></name>
<name><surname>Kong</surname><given-names>HJ</given-names></name>
<name><surname>Chai</surname><given-names>YJ</given-names></name>
</person-group>
<article-title>Enhancing recurrent laryngeal nerve localization during transoral endoscopic thyroid surgery using augmented reality: a proof-of-concept study</article-title>
<source>Ann Surg Treat Res</source>
<year>2025</year>
<volume>108</volume>
<fpage>135</fpage>
<lpage>42</lpage>
<comment>doi: 10.4174/astr.2025.108.3.135</comment>
</element-citation></ref>
<ref id="b23-jnn-2025-5-2-75">
<label>23</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Jung</surname><given-names>JY</given-names></name>
</person-group>
<article-title>Intraoperative nerve monitoring in thyroid surgery: a comprehensive review of technical principles, anesthetic considerations, and clinical applications</article-title>
<source>J Clin Med</source>
<year>2025</year>
<volume>14</volume>
<fpage>3259</fpage>
<comment>doi: 10.3390/jcm14093259</comment>
</element-citation></ref>
<ref id="b24-jnn-2025-5-2-75">
<label>24</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Lee</surname><given-names>JH</given-names></name>
<name><surname>Lee</surname><given-names>CY</given-names></name>
<name><surname>Eom</surname><given-names>JS</given-names></name>
<name><surname>Pak</surname><given-names>M</given-names></name>
<name><surname>Jeong</surname><given-names>HS</given-names></name>
<name><surname>Son</surname><given-names>HY</given-names></name>
</person-group>
<article-title>Predictions for three-month postoperative vocal recovery after thyroid surgery from spectrograms with deep neural network</article-title>
<source>Sensors (Basel)</source>
<year>2022</year>
<volume>22</volume>
<fpage>6387</fpage>
<comment>doi: 10.3390/s22176387</comment>
</element-citation></ref>
<ref id="b25-jnn-2025-5-2-75">
<label>25</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Stankovic</surname><given-names>P</given-names></name>
<name><surname>Wittlinger</surname><given-names>J</given-names></name>
<name><surname>Georgiew</surname><given-names>R</given-names></name>
<name><surname>Dominas</surname><given-names>N</given-names></name>
<name><surname>Hoch</surname><given-names>S</given-names></name>
<name><surname>Wilhelm</surname><given-names>T</given-names></name>
</person-group>
<article-title>Continuous intraoperative neuromonitoring (cIONM) in head and neck surgery: a review</article-title>
<source>HNO</source>
<year>2020</year>
<volume>68</volume>
<fpage>86</fpage>
<lpage>92</lpage>
<comment>doi: 10.1007/s00106-020-00824-1</comment>
</element-citation></ref>
<ref id="b26-jnn-2025-5-2-75">
<label>26</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Kim</surname><given-names>HY</given-names></name>
<name><surname>Tufano</surname><given-names>RP</given-names></name>
<name><surname>Randolph</surname><given-names>G</given-names></name>
<name><surname>Barczy&#x00144;ski</surname><given-names>M</given-names></name>
<name><surname>Wu</surname><given-names>CW</given-names></name>
<name><surname>Chiang</surname><given-names>FY</given-names></name>
<etal/>
</person-group>
<article-title>Impact of positional changes in neural monitoring endotracheal tube on amplitude and latency of electromyographic response in monitored thyroid surgery: results from the porcine experiment</article-title>
<source>Head Neck</source>
<year>2016</year>
<volume>38 Suppl 1</volume>
<fpage>E1004</fpage>
<lpage>8</lpage>
<comment>doi: 10.1002/hed.24145</comment>
</element-citation></ref>
<ref id="b27-jnn-2025-5-2-75">
<label>27</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Benjamens</surname><given-names>S</given-names></name>
<name><surname>Dhunnoo</surname><given-names>P</given-names></name>
<name><surname>Mesk&#x000f3;</surname><given-names>B</given-names></name>
</person-group>
<article-title>The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database</article-title>
<source>NPJ Digit Med</source>
<year>2020</year>
<volume>3</volume>
<fpage>118</fpage>
<comment>doi: 10.1038/s41746-020-00324-0</comment>
</element-citation></ref>
<ref id="b28-jnn-2025-5-2-75">
<label>28</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Fraser</surname><given-names>AG</given-names></name>
<name><surname>Biasin</surname><given-names>E</given-names></name>
<name><surname>Bijnens</surname><given-names>B</given-names></name>
<name><surname>Bruining</surname><given-names>N</given-names></name>
<name><surname>Caiani</surname><given-names>EG</given-names></name>
<name><surname>Cobbaert</surname><given-names>K</given-names></name>
<etal/>
</person-group>
<article-title>Artificial intelligence in medical device software and high-risk medical devices: a review of definitions, expert recommendations and regulatory initiatives</article-title>
<source>Expert Rev Med Devices</source>
<year>2023</year>
<volume>20</volume>
<fpage>467</fpage>
<lpage>91</lpage>
<comment>doi: 10.1080/17434440.2023.2184685</comment>
</element-citation></ref>
<ref id="b29-jnn-2025-5-2-75">
<label>29</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Nishiya</surname><given-names>Y</given-names></name>
<name><surname>Matsuura</surname><given-names>K</given-names></name>
<name><surname>Ogane</surname><given-names>T</given-names></name>
<name><surname>Hayashi</surname><given-names>K</given-names></name>
<name><surname>Kinebuchi</surname><given-names>Y</given-names></name>
<name><surname>Tanaka</surname><given-names>H</given-names></name>
<etal/>
</person-group>
<article-title>Anatomical recognition artificial intelligence for identifying the recurrent laryngeal nerve during endoscopic thyroid surgery: a single-center feasibility study</article-title>
<source>Laryngoscope Investig Otolaryngol</source>
<year>2024</year>
<volume>9</volume>
<fpage>e70049</fpage>
<comment>doi: 10.1002/lio2.70049</comment>
</element-citation></ref>
<ref id="b30-jnn-2025-5-2-75">
<label>30</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Vasey</surname><given-names>B</given-names></name>
<name><surname>Nagendran</surname><given-names>M</given-names></name>
<name><surname>Campbell</surname><given-names>B</given-names></name>
<name><surname>Clifton</surname><given-names>DA</given-names></name>
<name><surname>Collins</surname><given-names>GS</given-names></name>
<name><surname>Denaxas</surname><given-names>S</given-names></name>
<etal/>
</person-group>
<article-title>Reporting guideline for the early stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI</article-title>
<source>BMJ</source>
<year>2022</year>
<volume>377</volume>
<fpage>e070904</fpage>
<comment>doi: 10.1136/bmj-2022-070904</comment>
</element-citation></ref>
<ref id="b31-jnn-2025-5-2-75">
<label>31</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name><surname>Wiens</surname><given-names>J</given-names></name>
<name><surname>Saria</surname><given-names>S</given-names></name>
<name><surname>Sendak</surname><given-names>M</given-names></name>
<name><surname>Ghassemi</surname><given-names>M</given-names></name>
<name><surname>Liu</surname><given-names>VX</given-names></name>
<name><surname>Doshi-Velez</surname><given-names>F</given-names></name>
<etal/>
</person-group>
<article-title>Do no harm: a roadmap for responsible machine learning for health care</article-title>
<source>Nat Med</source>
<year>2019</year>
<volume>25</volume>
<fpage>1337</fpage>
<lpage>40</lpage>
<comment>doi: 10.1038/s41591-019-0548-6</comment>
</element-citation></ref></ref-list>
<sec sec-type="display-objects">
<title>Figures and Tables</title>
<fig id="f1-jnn-2025-5-2-75" position="float">
<label>Figure 1.</label><caption><p>From traditional monitoring to AI-enhanced IONM: a visual overview. Conventional console-based monitoring centered on EMG/MEP traces (left). Common clinical challenges that erode reliability, including tube malposition, electrocautery artifacts, ambiguous or evolving signals, and cognitive load (middle). AI-enhanced monitoring that couples analytics on trends, synchronized surgical video, and graded alerts to support earlier and more actionable decisions (right). IONM, intraoperative neuromonitoring; APS, automatic periodic stimulation; EMG, electromyography; RLN, recurrent laryngeal nerve; MEP, motor evoked potential; AI, artificial intelligence.</p></caption>
<graphic xlink:href="jnn-2025-5-2-75f1.tif"/></fig>
<fig id="f2-jnn-2025-5-2-75" position="float">
<label>Figure 2.</label><caption><p>Architecture and workflow of AI-enabled IONM. Multimodal inputs (free-running EMG, evoked MEP, synchronized surgical video) undergo artifact suppression, amplitude/latency normalization, and cross-modal synchronization before learning-based inference (deep temporal models for pre-LOS detection, classical ML on engineered features, computer-vision-based RLN localization, multimodal fusion). Outputs include graded alerts with confidence, surgeon-facing trend overlays and video snapshots, and decision-support functions (staging recommendation, RLN palsy risk estimation) aligned with SPIRIT-AI/CONSORT-AI/DECIDE-AI documentation. AI, artificial intelligence; ML, machine learning; EMG, electromyography; MEP, motor evoked potential; LOS, loss of signal; RLN, recurrent laryngeal nerve; SPIRIT-AI, Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence; CONSORT-AI, Consolidated Standards of Reporting Trials–Artificial Intelligence; DECIDE-AI, Developmental and Exploratory Clinical Investigations of DEcision support systems driven by Artificial Intelligence; IONM, intraoperative neuromonitoring.</p></caption>
<graphic xlink:href="jnn-2025-5-2-75f2.tif"/></fig>

<table-wrap id="t1-jnn-2025-5-2-75" position="float">
<label>Table 1.</label>
<caption><p>Summary of AI/ML strategies applied to IONM signal interpretation</p></caption>
<table rules="groups" frame="hsides">
<thead><tr>
<th align="center" valign="middle">Method</th>
<th align="center" valign="middle">Input data</th>
<th align="center" valign="middle">Key objective</th>
<th align="center" valign="middle">Model types</th>
<th align="center" valign="middle">Strengths</th>
<th align="center" valign="middle">Limitations</th>
</tr></thead>
<tbody>
<tr>
<td valign="top" align="left">End-to-end deep learning on EMG</td>
<td valign="top" align="left">Free-running EMG time-series (multi-channel)</td>
<td valign="top" align="left">Detect pre-LOS patterns and classify EMG morphologies/trends</td>
<td valign="top" align="left">Deep temporal networks (e.g., convolutional&#x02013;recurrent hybrids; transformer-style temporal models)</td>
<td valign="top" align="left">Learns temporal&#x02013;frequency signatures beyond simple amplitude thresholds; enables earlier detection than rule-based thresholds</td>
<td valign="top" align="left">Modest dataset sizes; label subjectivity; limited external validation and generalization across sites/cases</td>
</tr>
<tr>
<td valign="top" align="left">Classical ML with engineered features</td>
<td valign="top" align="left">EMG trends summarized as handcrafted descriptors (amplitude/latency deltas, slopes, area-under-trend, spectral indices)</td>
<td valign="top" align="left">Multi-class event classification and artifact discrimination</td>
<td valign="top" align="left">Supervised learners using engineered features (e.g., SVM, Random Forests, gradient-boosting ensembles)</td>
<td valign="top" align="left">Feature-level interpretability; competitive performance with smaller datasets; supports probability calibration for clinician trust</td>
<td valign="top" align="left">Performance degrades as noise/complexity increase; relies on feature engineering and site-specific tuning</td>
</tr>
<tr>
<td valign="top" align="left">MEP analytics</td>
<td valign="top" align="left">Evoked MEP waveforms and trend features (amplitude, latency, trajectory)</td>
<td valign="top" align="left">Classification/risk stratification based on MEP responses; methodological template transferable to RLN contexts</td>
<td valign="top" align="left">Comparative evaluation of multiple supervised classifiers on intraoperative MEP datasets</td>
<td valign="top" align="left">Mature analytic endpoints and benchmarking practices; provides methods directly transferrable to IONM problems</td>
<td valign="top" align="left">Originally developed outside RLN EMG; requires adaptation/validation for target anatomy and workflows</td>
</tr>
<tr>
<td valign="top" align="left">Computer vision for RLN localization</td>
<td valign="top" align="left">Surgical video (open and endoscopic) synchronized with monitoring</td>
<td valign="top" align="left">Identify/segment RLN to provide anatomy-aware context for monitoring</td>
<td valign="top" align="left">End-to-end deep networks for localization/segmentation</td>
<td valign="top" align="left">Supplies spatial information absent from signal-only pipelines; supports augmented visualization and action at the site of risk</td>
<td valign="top" align="left">Small, heterogeneous datasets; cross-site generalization remains a principal barrier; prospective OR integration needed</td>
</tr>
</tbody></table>
<table-wrap-foot>
<fn><p>Each method utilizes different data types and architectures, with trade-offs between interpretability, latency, and robustness.</p>
<p>AI, artificial intelligence; ML, machine learning; IONM, intraoperative neuromonitoring; EMG, electromyography; MEP, motor evoked potential; RLN, recurrent laryngeal nerve; LOS, loss of signal; SVM, support vector machine; OR, operating room.</p></fn>
</table-wrap-foot>
</table-wrap>
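<p>As a companion to Table 1, the minimal sketch below illustrates the second row (classical ML on engineered EMG-trend features). It is not code from any of the cited studies; the descriptor definitions, window length, toy data, and classifier choice are illustrative assumptions showing how amplitude delta, slope, and area-under-trend features might feed a supervised learner.</p>
<preformat preformat-type="code"><![CDATA[
# Hypothetical sketch of classical ML on engineered EMG-trend features (Table 1, row 2).
# Not from the reviewed studies; feature definitions, window length, and labels are
# illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def trend_features(amplitudes_uv, fs_hz=1.0):
    """Handcrafted descriptors of one EMG amplitude-trend window."""
    x = np.asarray(amplitudes_uv, dtype=float)
    baseline = max(x[0], 1e-6)
    delta = (x[-1] - x[0]) / baseline                        # relative amplitude change
    slope = np.polyfit(np.arange(len(x)) / fs_hz, x, 1)[0]   # uV per second
    area = np.trapz(np.clip(baseline - x, 0, None)) / (baseline * len(x))  # normalized drop area
    return np.array([delta, slope, area])

# Toy data: 60-sample windows labelled 1 if the trend is declining, else 0.
rng = np.random.default_rng(0)
stable = [100 + rng.normal(0, 3, 60) for _ in range(40)]
declining = [100 - np.linspace(0, 60, 60) + rng.normal(0, 3, 60) for _ in range(40)]
X = np.array([trend_features(w) for w in stable + declining])
y = np.array([0] * 40 + [1] * 40)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
]]></preformat>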

<table-wrap id="t2-jnn-2025-5-2-75" position="float">
<label>Table 2.</label>
<caption><p>Comparison of real-time alerting strategies in IONM systems</p></caption>
<table rules="groups" frame="hsides">
<thead><tr>
<th align="center" valign="middle">Method</th>
<th align="center" valign="middle">Input signals</th>
<th align="center" valign="middle">Detection mechanism</th>
<th align="center" valign="middle">Interpretability</th>
<th align="center" valign="middle">Alert latency</th>
<th align="center" valign="middle">Artifact robustness</th>
</tr></thead>
<tbody>
<tr>
<td valign="top" align="left">Threshold-based rules</td>
<td valign="top" align="left">EMG trend measures used in standard LOS criteria</td>
<td valign="top" align="left">Fixed, predefined amplitude/latency thresholds and LOS rules</td>
<td valign="top" align="left">Explicit, rule-driven logic that aligns with current practice</td>
<td valign="top" align="left">Device-level triggers; Section 5.2 emphasizes measuring acquisition-to-alert latency</td>
<td valign="top" align="left">Noted vulnerability to artifacts and lack of individualized risk calibration (Section 5.1)</td>
</tr>
<tr>
<td valign="top" align="left">ML-based detection</td>
<td valign="top" align="left">Intraoperative EMG time-series/trend features</td>
<td valign="top" align="left">Supervised learning detectors trained on signal dynamics</td>
<td valign="top" align="left">Not rule-based; requires calibrated alerts and standardized visualization for consistent interpretation (Sections 5.1&#x02013;5.2)</td>
<td valign="top" align="left">Adds model inference; total latency must be assessed end-to-end (Section 5.2)</td>
<td valign="top" align="left">Artifact handling depends on training/preprocessing; prospective in-loop evaluation needed to quantify impact (Section 5.1)</td>
</tr>
<tr>
<td valign="top" align="left">Hybrid systems (rule+ML)</td>
<td valign="top" align="left">EMG trends as above; optionally co-registered with anatomy-aware overlays (adjacent developments)</td>
<td valign="top" align="left">Rule-based safety gating with ML prioritization/suppression to flag high-risk trends consistent with the severe combined event</td>
<td valign="top" align="left">Retains rule transparency while adding model-supported prioritization; standardized visualization advocated for consistency (Section 5.2)</td>
<td valign="top" align="left">Designed to preserve responsiveness while incorporating inference; latency considerations as in Section 5.2</td>
<td valign="top" align="left">Intended to reduce nuisance alerts via gating/prioritization; co-registration with video noted as adjacent development (Section 5.3)</td>
</tr>
</tbody></table>
<table-wrap-foot>
<fn><p>Comparative overview of intraoperative alerting strategies for EMG signal loss detection, contrasting rule-based thresholds with ML-based and hybrid approaches in terms of latency, reliability, and clinical usability.</p>
<p>IONM, intraoperative neuromonitoring; ML, machine learning; EMG, electromyography; LOS, loss of signal.</p></fn>
</table-wrap-foot>
</table-wrap>
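<p>As a companion to Table 2, the sketch below outlines, under stated assumptions, how a hybrid system might combine a transparent threshold gate with a model-derived risk score to grade alerts. The 50% amplitude and 10% latency thresholds and the risk cut-offs are illustrative placeholders, not values recommended by this review or by any cited device.</p>
<preformat preformat-type="code"><![CDATA[
# Hypothetical hybrid rule + ML alerting logic (Table 2, third row).
# Thresholds and risk cut-offs are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class TrendSample:
    amplitude_uv: float
    latency_ms: float

def rule_gate(baseline: TrendSample, current: TrendSample) -> bool:
    """Fixed combined-event style rule: >50% amplitude drop and >10% latency increase."""
    amp_drop = 1.0 - current.amplitude_uv / baseline.amplitude_uv
    lat_increase = current.latency_ms / baseline.latency_ms - 1.0
    return amp_drop > 0.50 and lat_increase > 0.10

def graded_alert(baseline: TrendSample, current: TrendSample, ml_risk: float) -> str:
    """Rule gate is always evaluated first; the ML score only grades or prioritizes."""
    if rule_gate(baseline, current):
        return "RED: rule-based combined event"
    if ml_risk >= 0.8:
        return "AMBER: model flags a high-risk trend"
    if ml_risk >= 0.5:
        return "ADVISORY: model flags a possible decline"
    return "OK"

baseline = TrendSample(amplitude_uv=800.0, latency_ms=3.5)
print(graded_alert(baseline, TrendSample(300.0, 4.0), ml_risk=0.30))  # rule gate fires
print(graded_alert(baseline, TrendSample(700.0, 3.6), ml_risk=0.85))  # model prioritization
]]></preformat>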
</sec>
</back></article>