NCT06164717

Brief Summary

This study meets the NIH definition of a clinical trial, but is not a treatment study. Instead, the goal of this study is to investigate how hearing ourselves speak affects the planning and execution of speech movements. The study investigates this topic in both typical speakers and in patients with Deep Brain Stimulation (DBS) implants. The main questions it aims to answer are:

  • Does the way we hear our own speech while talking affect future speech movements?
  • Can the speech of DBS patients reveal which brain areas are involved in adjusting speech movements?

Participants will read words, sentences, or series of random syllables from a computer monitor while their speech is being recorded. For some participants, an electrode cap is also used to record brain activity during these tasks. For DBS patients, the tasks will be performed with the stimulator ON and with the stimulator OFF.

Trial Health

Score: 77 (On Track)

Trial Health Score

Automated assessment based on enrollment pace, timeline, and geographic reach
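The exact scoring formula is not published in this record. As a purely illustrative sketch, a composite score of this kind could combine the three named factors; the weights, field names, and normalization below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TrialSnapshot:
    enrollment_pace: float   # fraction of enrollment target reached relative to time elapsed
    timeline: float          # fraction of the planned study window remaining
    geographic_reach: float  # normalized count of active sites/countries

def health_score(t: TrialSnapshot, weights=(0.5, 0.3, 0.2)) -> int:
    """Weighted composite on a 0-100 scale; weights are purely illustrative."""
    raw = (weights[0] * min(t.enrollment_pace, 1.0)
           + weights[1] * min(t.timeline, 1.0)
           + weights[2] * min(t.geographic_reach, 1.0))
    return round(100 * raw)
```

As the record itself notes, such a score is suitable for screening only, not for judging a trial's scientific merit.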

Enrollment
507

participants targeted

Enrollment target is at or above the 75th percentile for comparable trials (phase: not applicable)

Timeline
21 months left

Started Jan 2023

Estimated timeline is longer than the 75th percentile for comparable trials (phase: not applicable)

Geographic Reach
1 country

1 active site

Status
recruiting

Health score is calculated from publicly available data and should be used for screening purposes only.


Study Timeline

Key milestones and dates

Study Progress: 66% (Jan 2023 to Dec 2027)
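The progress percentage is presumably the elapsed fraction of the study window. A minimal sketch of that calculation (the "as of" date below is an assumption; only the start and completion dates come from this record):

```python
from datetime import date

def study_progress(start: date, end: date, today: date) -> int:
    """Percent of the study window elapsed, clamped to 0-100."""
    total = (end - start).days
    elapsed = (today - start).days
    return round(100 * min(max(elapsed / total, 0.0), 1.0))

# Study window from this record; the reference date is a hypothetical render date
pct = study_progress(date(2023, 1, 1), date(2027, 12, 31), date(2026, 4, 20))
```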

Study Start

First participant enrolled

January 1, 2023

Completed
11 months until next milestone

First Submitted

Initial submission to the registry

November 22, 2023

Completed
19 days until next milestone

First Posted

Study publicly available on registry

December 11, 2023

Completed
4.1 years until next milestone

Primary Completion

Last participant's last visit for primary outcome

December 31, 2027

Expected
Next milestone falls on the same day

Study Completion

Last participant's last visit for all outcomes

December 31, 2027

Last Updated

December 11, 2023

Status Verified

December 1, 2023

Enrollment Period

5 years

First QC Date

November 22, 2023

Last Update Submit

December 1, 2023

Outcome Measures

Primary Outcomes (6)

  • Speech formant frequencies

    The frequencies of the subject's first two formants (F1, F2) for each test word will be measured from spectrographic displays with overlaid Linear Predictive Coding formant tracks.

    Measurements will be made only from acoustic recordings made during the test session (~1 hour).

  • Reach direction for arm movements

    Measuring initial reach direction for arm movements allows us to measure the direction that was planned before movement onset.

    Outcome measures will be made only during a single data recording session (~2 hours).

  • Amplitude of long-latency auditory evoked potential responses (from EEG recordings)

    Amplitude of the N1 component (in microvolts) will be measured in response to both probe tones and to the subject's own speech onset.

    Measurements will be made only from electroencephalography (EEG) recordings made during the test session (~2 hours).

  • Local field potentials recorded by neural implants

    Local field potentials (LFPs) will be recorded by the Percept PC DBS implants and used to measure changes in power spectral density across different phases of the tasks. Additionally, LFPs will be used to conduct event-related analyses.

    Measurements will be made only from DBS implant recordings made during the test session (~1-2 hours).

  • Temporal measures of speech syllable sequence learning

    1. Speech onset time (in milliseconds); 2. Average syllable duration (in milliseconds)

    Outcome measures will be made only during a single data recording session (~0.5 hours).

  • Accuracy during speech syllable sequence learning

    Sequence accuracy (in percent)

    Outcome measures will be made only during a single data recording session (~0.5 hours).
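The formant outcome above is measured from LPC formant tracks. As a rough sketch of the underlying technique (not the lab's actual analysis pipeline), autocorrelation LPC can recover the resonances of a synthetic two-formant signal; all parameter values here are illustrative:

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_formants(x, fs, order=4):
    """Estimate formant frequencies (Hz) via autocorrelation LPC."""
    x = x * np.hamming(len(x))
    r = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation, lags 0..N-1
    a = solve_toeplitz(r[:order], r[1:order + 1])     # Yule-Walker equations: R a = r
    roots = np.roots(np.concatenate(([1.0], -a)))     # zeros of A(z) = 1 - sum a_k z^-k
    roots = roots[np.imag(roots) > 0]                 # keep one root per conjugate pair
    return np.sort(np.angle(roots) * fs / (2 * np.pi))

# Synthetic vowel-like AR(4) signal with resonances near 500 and 1500 Hz
fs = 10000
den = np.array([1.0])
for freq, bw in [(500, 60), (1500, 90)]:
    r_pole = np.exp(-np.pi * bw / fs)
    den = np.polymul(den, [1, -2 * r_pole * np.cos(2 * np.pi * freq / fs), r_pole**2])
x = lfilter([1.0], den, np.random.default_rng(0).standard_normal(fs))  # 1 s of noise excitation
f1, f2 = lpc_formants(x, fs)  # real speech would also need pre-emphasis and a higher order
```

For real recordings, the study's approach of checking LPC tracks against spectrographic displays guards against the spurious or merged peaks that a blind LPC analysis can produce.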
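The LFP outcome above compares power spectral density across task phases. A minimal sketch using Welch's method on synthetic data; the 250 Hz sampling rate, the beta-band focus, and the two phase labels are assumptions for illustration, not details from this record:

```python
import numpy as np
from scipy.signal import welch

fs = 250                      # assumed LFP sampling rate
rng = np.random.default_rng(1)
t = np.arange(fs * 30) / fs   # 30 s of signal per task phase

# Synthetic LFPs: a 20 Hz beta oscillation that is weaker during the speech phase
rest = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
speech = 0.3 * np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)

f, psd_rest = welch(rest, fs=fs, nperseg=2 * fs)    # 0.5 Hz frequency resolution
_, psd_speech = welch(speech, fs=fs, nperseg=2 * fs)

beta = (f >= 13) & (f <= 30)
suppression = psd_speech[beta].sum() / psd_rest[beta].sum()  # < 1 means beta suppression
```

Event-related analyses would instead epoch the LFP around task events before averaging, but the band-power comparison above captures the power-spectrum part of the outcome.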

Study Arms (3)

Auditory feedback perturbation during speech

EXPERIMENTAL

The intervention consists of manipulating real-time auditory feedback during speech production. In our lab, such feedback perturbations can be implemented with either a stand-alone digital vocal processor (a device commonly used by singers and the music industry) or with software-based signal processing routines (see Equipment section for details). Note that the study does not investigate the efficacy of these hardware or software methods to induce behavioral change in subjects' speech. Rather, the study addresses basic experimental questions regarding the general role of auditory feedback in the central nervous system's control of articulatory speech movements.

Behavioral: Auditory feedback perturbation during speech

Visual feedback perturbation during reaching

EXPERIMENTAL

The intervention consists of manipulating real-time visual feedback during upper limb reaching movements. In our lab, such feedback perturbations can be implemented with a virtual reality display system.

Behavioral: Visual feedback perturbation during reaching

Deep brain stimulation

EXPERIMENTAL

This intervention consists of toggling the deep brain stimulation (DBS) implant ON/OFF prior to participation in the speech auditory-motor learning tasks and speech sequence learning tasks. The intervention can be implemented by the subjects themselves, as all patients have a hand-held controller that they use to switch stimulation ON/OFF.

Other: DBS stimulation ON/OFF

Interventions

The intervention consists of manipulating real-time auditory feedback during speech production. In our lab, such feedback perturbations can be implemented with either a stand-alone digital vocal processor (a device commonly used by singers and the music industry) or with software-based signal processing routines (see Equipment section for details). Note that the study does not investigate the efficacy of these hardware or software methods to induce behavioral change in subjects' speech. Rather, the study addresses basic experimental questions regarding the general role of auditory feedback in the central nervous system's control of articulatory speech movements.

Auditory feedback perturbation during speech

The intervention consists of manipulating real-time visual feedback during upper limb reaching movements. In our lab, such feedback perturbations can be implemented with a virtual reality display system.

Visual feedback perturbation during reaching

Patients who have been previously implanted with a DBS stimulator for their clinical care will be tested in two speech motor learning tasks with the stimulation ON and with the stimulation OFF. Note that (1) patients routinely turn the stimulation OFF and back ON (for example, some patients do so to sleep or to save battery), and (2) we are not in any way evaluating the stimulator itself or its clinical effectiveness, but only whether two forms of speech motor learning (adaptation to auditory feedback perturbation and speech sequence learning) are affected differently by having the stimulation ON or OFF. Patients toggle the stimulation themselves using the hand-held controller provided with their implant.

Deep brain stimulation

Eligibility Criteria

Age: 4 Years+
Sex: All
Healthy Volunteers: Yes
Age Groups: Child (0-17), Adult (18-64), Older Adult (65+)

You may qualify if:

  • native speaker of American English
  • no communication or neurological problems (except for subjects in the DBS group)
  • pure tone hearing thresholds equal to or better than 25 dB HL for children and young adults, and equal to or better than 35 dB HL for older adults
  • no medications that affect sensorimotor functioning (except for in the DBS group)
  • adult subjects: 18 years of age or older
  • typical children: 4;0 to 6;11 or 10;0 to 12;11 [years;months]
  • scoring above the 20th percentile on the Peabody Picture Vocabulary Test (PPVT-5), Expressive Vocabulary Test (EVT-3), Goldman-Fristoe Test of Articulation (GFTA-3), and either Test of Early Language Development (TELD-4) or (for children age 8 or older) Clinical Evaluation of Language Fundamentals (CELF-5)
  • bilateral electrodes implanted in either the ventral intermediate nucleus of the thalamus (Vim; a target site for patients with essential tremor) or the subthalamic nucleus (STN; a target site for patients with Parkinson's disease)

Contact the study team to confirm eligibility.

Sponsors & Collaborators

Study Sites (1)

University of Washington

Seattle, Washington, 98105, United States

RECRUITING

MeSH Terms

Conditions

Speech

Condition Hierarchy (Ancestors)

Verbal Behavior > Communication > Behavior

Study Officials

  • Ludo Max, Ph.D.

    University of Washington

    PRINCIPAL INVESTIGATOR

Study Design

Study Type: Interventional
Phase: Not Applicable
Allocation: Randomized
Masking: None
Purpose: Basic Science
Intervention Model: Factorial
Sponsor Type: Other
Responsible Party: Principal Investigator
PI Title: Professor, Department of Speech and Hearing Sciences

Study Record Dates

First Submitted

November 22, 2023

First Posted

December 11, 2023

Study Start

January 1, 2023

Primary Completion (Estimated)

December 31, 2027

Study Completion (Estimated)

December 31, 2027

Last Updated

December 11, 2023

Record last verified: 2023-12
