NCT06517225

Brief Summary

Children with speech sound disorder show diminished intelligibility in spoken communication and may thus be perceived as less capable than peers, with negative consequences for both socioemotional and socioeconomic outcomes. New technologies have the potential to transform interventions for speech sound disorder, but rigorous evidence to substantiate this promise is lacking. This research will meet a public health need by systematically evaluating the efficacy of visual-acoustic biofeedback intervention delivered in person versus via telepractice. The objective of this study is to test the hypothesis that treatment incorporating visual-acoustic biofeedback can be delivered via telepractice without a significant loss of efficacy. Participants will be randomly assigned to receive identical treatment either via online telepractice or in the laboratory setting. The same visual-acoustic biofeedback software, staRt, will be used in both conditions. Participants' progress in treatment will be evaluated based on blinded listeners' perceptual ratings of probes produced before and after treatment. Pre- and post-treatment evaluations will be carried out in person for all participants.

Trial Health

Trial Health Score: 77 (On Track)

Automated assessment based on enrollment pace, timeline, and geographic reach

Enrollment

76 participants targeted

Target at P50-P75 for Phase 2

Timeline

32 months left

Started Jul 2024

Typical duration for Phase 2

Geographic Reach
1 country

3 active sites

Status
Recruiting

Health score is calculated from publicly available data and should be used for screening purposes only.


Study Timeline

Key milestones and dates

Study Progress: 41%
Jul 2024 - Dec 2028

Study Start

First participant enrolled

July 1, 2024

Completed

First Submitted

Initial submission to the registry

July 18, 2024

Completed

First Posted

Study publicly available on registry

July 24, 2024

Completed

Primary Completion

Last participant's last visit for primary outcome

June 30, 2028

Expected

Study Completion

Last participant's last visit for all outcomes

December 31, 2028

Last Updated

June 25, 2025

Status Verified

June 1, 2025

Enrollment Period

4 years

First QC Date

July 18, 2024

Last Update Submit

June 23, 2025

Conditions

Speech Sound Disorder

Keywords

speech, articulation, motor development

Outcome Measures

Primary Outcomes (1)

  • Percentage of "Correct" Ratings by Blinded Untrained Listeners for /r/ Sounds Produced in Word Probes

    To assess generalization of treatment gains to untreated words, participants will be assessed with standard probes (30 words [considered the primary target], 20 syllables, and 10 sentences containing /r/ in various phonetic contexts). Stimuli in each probe will be presented individually in randomized order with blocking by stimulus type (word, syllable, sentence). Individual words will be isolated from the audio record of each word probe and presented in randomized order for binary rating (correct/incorrect) by 9 untrained listeners who are blind to treatment condition and time point, but will see the written representation of each target word. The proportion of "correct" ratings for each token will be used as the primary measure of perceptually rated accuracy.

    Before the initiation of treatment and again after the end of all treatment (10 weeks later)
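The scoring rule described above reduces to simple arithmetic: each token receives a binary judgment from at least 9 blinded listeners, and its score is the proportion of "correct" judgments. A minimal sketch (the function name and example ratings are illustrative, not from the study's analysis code):

```python
# Sketch of the primary-outcome aggregation: a token's perceptually rated
# accuracy is the fraction of blinded listeners who judged it "correct".

def proportion_correct(ratings):
    """ratings: list of 0/1 judgments (1 = rated correct) for one token."""
    if not ratings:
        raise ValueError("need at least one rating")
    return sum(ratings) / len(ratings)

# One token rated by 9 listeners: 6 of 9 judged it correct.
token_ratings = [1, 1, 0, 1, 0, 1, 1, 0, 1]
score = proportion_correct(token_ratings)  # 6/9, about 0.667
```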

Secondary Outcomes (1)

  • Survey evaluating impacts of speech disorder on participants' social, emotional, and academic well-being.

    Before the initiation of treatment and again after the end of all treatment (10 weeks later)

Study Arms (2)

Telepractice delivery

EXPERIMENTAL

Participants will receive visual-acoustic biofeedback treatment from a clinician in a private, password-protected WebRTC room.

Behavioral: Visual-acoustic biofeedback

In-person delivery

ACTIVE COMPARATOR

Participants will receive visual-acoustic biofeedback treatment from a clinician in a private room in research space at one of the two clinical research sites.

Behavioral: Visual-acoustic biofeedback

Interventions

Behavioral: Visual-acoustic biofeedback In visual-acoustic biofeedback treatment, the elements of traditional treatment (auditory models and verbal descriptions of articulator placement) are enhanced with a dynamic display of the speech signal in the form of the real-time LPC (Linear Predictive Coding) spectrum. The web-based software staRt will be used for intervention delivery.

Also known as: staRt app
In-person delivery, Telepractice delivery
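The real-time LPC spectrum described above is a smooth spectral envelope fitted to short windowed frames of the microphone signal. A minimal offline sketch using NumPy and the autocorrelation method (the function names, LPC order, and synthetic test frame are illustrative assumptions, not part of the staRt codebase, which would use an optimized real-time recursion):

```python
import numpy as np

def lpc_coefficients(frame, order=12):
    """Estimate LPC coefficients for one windowed frame by solving the
    autocorrelation normal equations directly (a real-time display would
    typically use the Levinson-Durbin recursion instead)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))  # prediction-error polynomial A(z)

def lpc_spectrum(frame, order=12, n_fft=512):
    """Smooth spectral envelope 1/|A(e^jw)| — the curve such displays plot."""
    A = np.fft.rfft(lpc_coefficients(frame, order), n_fft)
    return 1.0 / np.abs(A)

# Illustrative input: a synthetic vowel-like frame (two sinusoids plus a
# little noise to mimic a real recording), Hanning-windowed.
fs = 16000
t = np.arange(512) / fs
frame = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
frame += 0.01 * np.random.default_rng(0).standard_normal(len(t))
frame *= np.hanning(len(frame))
env = lpc_spectrum(frame)  # envelope peaks near the sinusoid frequencies
```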

Eligibility Criteria

Age: 9 Years - 17 Years
Sex: Female
Healthy Volunteers: No
Age Groups: Child (0-17)

You may qualify if:

  • Must be between 9;0 and 17;11 (years;months) old at the time of enrollment.
  • Must speak English as the dominant or equally dominant language
  • Must have begun learning English by age 3, per parent report.
  • Must hear a rhotic dialect of English from at least one speaker in the home if the home language is English.
  • Must pass a pure-tone hearing screening.
  • Must pass a brief examination of oral structure and function.
  • Must exhibit less than thirty percent accuracy, based on trained listener ratings, on a probe list eliciting /r/ in various phonetic contexts at the word level.
  • Must demonstrate age-appropriate receptive and expressive language abilities on the Clinical Evaluation of Language Fundamentals-5 (CELF-5).
  • Must have access to a laptop or desktop computer for study sessions in the event of randomization to the telepractice condition.
  • Must have home wifi sufficient to support video calls in the event of randomization to the telepractice condition.

You may not qualify if:

  • Must not exhibit voice or fluency disorder of a severity judged likely to interfere with the ability to participate in study activities.
  • Must not currently have orthodontia that crosses the palate and cannot be removed.
  • Must not have history of permanent hearing loss.
  • Must not have an existing diagnosis of developmental disability such as cerebral palsy or Down syndrome.
  • Must not have history of major brain injury, surgery, or stroke in the past year.
  • Must not have epilepsy with active seizure incidents within the past 6 months.
  • Must not show clinically significant signs of apraxia of speech or dysarthria.

Contact the study team to confirm eligibility.

Sponsors & Collaborators

Study Sites (3)

Montclair State University

Bloomfield, New Jersey, 07003, United States

RECRUITING

New York University

New York, New York, 10012, United States

ACTIVE NOT RECRUITING

Syracuse University

Syracuse, New York, 13244, United States

RECRUITING

Related Publications (18)

  • Ayala SA, Eads A, Kabakoff H, Swartz MT, Shiller DM, Hill J, Hitchcock ER, Preston JL, McAllister T. Auditory and Somatosensory Development for Speech in Later Childhood. J Speech Lang Hear Res. 2023 Apr 12;66(4):1252-1273. doi: 10.1044/2022_JSLHR-22-00496. Epub 2023 Mar 17.

    PMID: 36930986 (Background)
  • Benway NR, Preston JL, Hitchcock E, Rose Y, Salekin A, Liang W, McAllister T. Reproducible Speech Research With the Artificial Intelligence-Ready PERCEPT Corpora. J Speech Lang Hear Res. 2023 Jun 20;66(6):1986-2009. doi: 10.1044/2023_JSLHR-22-00343. Epub 2023 Jun 15.

    PMID: 37319018 (Background)
  • Campbell H, Harel D, Hitchcock E, McAllister Byun T. Selecting an acoustic correlate for automated measurement of American English rhotic production in children. Int J Speech Lang Pathol. 2018 Nov;20(6):635-643. doi: 10.1080/17549507.2017.1359334. Epub 2017 Aug 10.

    PMID: 28795872 (Background)
  • Campbell H, McAllister Byun T. Deriving individualised /r/ targets from the acoustics of children's non-rhotic vowels. Clin Linguist Phon. 2018;32(1):70-87. doi: 10.1080/02699206.2017.1330898. Epub 2017 Jul 13.

    PMID: 28703653 (Background)
  • Harel D, Hitchcock ER, Szeredi D, Ortiz J, McAllister Byun T. Finding the experts in the crowd: Validity and reliability of crowdsourced measures of children's gradient speech contrasts. Clin Linguist Phon. 2017;31(1):104-117. doi: 10.3109/02699206.2016.1174306. Epub 2016 Jun 7.

    PMID: 27267258 (Background)
  • Hitchcock ER, Ochs LC, Swartz MT, Leece MC, Preston JL, McAllister T. Tutorial: Using Visual-Acoustic Biofeedback for Speech Sound Training. Am J Speech Lang Pathol. 2023 Jan 11;32(1):18-36. doi: 10.1044/2022_AJSLP-22-00142. Epub 2023 Jan 9.

    PMID: 36623212 (Background)
  • Hitchcock ER, Harel D, Byun TM. Social, Emotional, and Academic Impact of Residual Speech Errors in School-Aged Children: A Survey Study. Semin Speech Lang. 2015 Nov;36(4):283-94. doi: 10.1055/s-0035-1562911. Epub 2015 Oct 12.

    PMID: 26458203 (Background)
  • Hitchcock ER, Byun TM. Enhancing generalisation in biofeedback intervention using the challenge point framework: a case study. Clin Linguist Phon. 2015 Jan;29(1):59-75. doi: 10.3109/02699206.2014.956232. Epub 2014 Sep 12.

    PMID: 25216375 (Background)
  • McAllister Byun T. Efficacy of Visual-Acoustic Biofeedback Intervention for Residual Rhotic Errors: A Single-Subject Randomization Study. J Speech Lang Hear Res. 2017 May 24;60(5):1175-1193. doi: 10.1044/2016_JSLHR-S-16-0038.

    PMID: 28389677 (Background)
  • McAllister Byun T, Campbell H. Differential Effects of Visual-Acoustic Biofeedback Intervention for Residual Speech Errors. Front Hum Neurosci. 2016 Nov 11;10:567. doi: 10.3389/fnhum.2016.00567. eCollection 2016.

    PMID: 27891084 (Background)
  • McAllister Byun T, Halpin PF, Szeredi D. Online crowdsourcing for efficient rating of speech: a validation study. J Commun Disord. 2015 Jan-Feb;53:70-83. doi: 10.1016/j.jcomdis.2014.11.003. Epub 2014 Dec 15.

    PMID: 25578293 (Background)
  • Byun TM, Hitchcock ER, Swartz MT. Retroflex versus bunched in treatment for rhotic misarticulation: evidence from ultrasound biofeedback intervention. J Speech Lang Hear Res. 2014 Dec;57(6):2116-30. doi: 10.1044/2014_JSLHR-S-14-0034.

    PMID: 25088034 (Background)
  • Byun TM, Hitchcock ER. Investigating the use of traditional and spectral biofeedback approaches to intervention for /r/ misarticulation. Am J Speech Lang Pathol. 2012 Aug;21(3):207-21. doi: 10.1044/1058-0360(2012/11-0083). Epub 2012 Mar 21.

    PMID: 22442281 (Background)
  • Peterson L, Savarese C, Campbell T, Ma Z, Simpson KO, McAllister T. Telepractice Treatment of Residual Rhotic Errors Using App-Based Biofeedback: A Pilot Study. Lang Speech Hear Serv Sch. 2022 Apr 11;53(2):256-274. doi: 10.1044/2021_LSHSS-21-00084. Epub 2022 Jan 20.

    PMID: 35050705 (Background)
  • Preston JL, McAllister T, Phillips E, Boyce S, Tiede M, Kim JS, Whalen DH. Treatment for Residual Rhotic Errors With High- and Low-Frequency Ultrasound Visual Feedback: A Single-Case Experimental Design. J Speech Lang Hear Res. 2018 Aug 8;61(8):1875-1892. doi: 10.1044/2018_JSLHR-S-17-0441.

    PMID: 30073249 (Background)
  • Preston JL, Holliman-Lopez G, Leece MC. Do Participants Report Any Undesired Effects in Ultrasound Speech Therapy? Am J Speech Lang Pathol. 2018 May 3;27(2):813-818. doi: 10.1044/2017_AJSLP-17-0121.

    PMID: 29546269 (Background)
  • Preston JL, McAllister Byun T, Boyce SE, Hamilton S, Tiede M, Phillips E, Rivera-Campos A, Whalen DH. Ultrasound Images of the Tongue: A Tutorial for Assessment and Remediation of Speech Sound Errors. J Vis Exp. 2017 Jan 3;(119):55123. doi: 10.3791/55123.

    PMID: 28117824 (Background)
  • McAllister T, Preston JL, Hitchcock ER, Benway NR, Hill J. Protocol for visual-acoustic intervention with service delivery in-person and via telepractice (VISIT) non-inferiority trial for residual speech sound disorder. BMC Pediatr. 2025 Jan 27;25(1):65. doi: 10.1186/s12887-024-05364-z.

MeSH Terms

Conditions

Speech Sound Disorder, Speech

Condition Hierarchy (Ancestors)

Communication Disorders, Neurodevelopmental Disorders, Mental Disorders, Verbal Behavior, Communication, Behavior

Study Design

Study Type
Interventional
Phase
Phase 2
Allocation
RANDOMIZED
Masking
SINGLE
Who Masked
OUTCOMES ASSESSOR
Masking Details
All perceptual ratings will be obtained from blinded, naive listeners recruited through online crowdsourcing. Following protocols refined in previous published research, binary rating responses will be aggregated over at least 9 unique listeners per token.
Purpose
TREATMENT
Intervention Model
PARALLEL
Model Details: All participants will complete an initial evaluation to determine eligibility and estimate the severity of speech sound disorder. Participants will be categorized into low and high severity groups and will be randomized with stratification by severity to the in-person or telepractice treatment condition. Participants will then complete 10 weeks of visual-acoustic biofeedback treatment in their randomly assigned condition. Treatment will be delivered individually by a speech-language pathologist and will elicit structured practice of /r/ in 20 semiweekly sessions.
Sponsor Type
OTHER
Responsible Party
SPONSOR
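The model details above describe randomization stratified by severity. A generic sketch of such a scheme (shuffled blocks within each stratum keep the two arms balanced; the function, block size, and stratum labels are illustrative assumptions, not the study's actual allocation procedure):

```python
import random

# Illustrative stratified block randomization: within each severity stratum,
# participants are assigned to the two conditions in shuffled blocks so the
# arms stay approximately balanced at any point in enrollment.

CONDITIONS = ["in-person", "telepractice"]

def stratified_assignments(n_per_stratum, block_size=4, seed=None):
    rng = random.Random(seed)
    assignments = {}
    for stratum in ("low severity", "high severity"):
        seq = []
        while len(seq) < n_per_stratum:
            block = CONDITIONS * (block_size // len(CONDITIONS))
            rng.shuffle(block)          # randomize order within each block
            seq.extend(block)
        assignments[stratum] = seq[:n_per_stratum]
    return assignments

alloc = stratified_assignments(38, seed=1)  # 76 participants total
```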

Study Record Dates

First Submitted

July 18, 2024

First Posted

July 24, 2024

Study Start

July 1, 2024

Primary Completion (Estimated)

June 30, 2028

Study Completion (Estimated)

December 31, 2028

Last Updated

June 25, 2025

Record last verified: 2025-06

Data Sharing

IPD Sharing
Will share

A manual of procedures and materials for treatment and clinician training will be released through the Open Science Framework (OSF). Complete de-identified data and analysis scripts will also be released on OSF when the study is completed; materials will be posted as component projects are completed.

Shared Documents
STUDY PROTOCOL, ANALYTIC CODE
Time Frame
Data will be shared within 6 months of the study end date (7/31/28) and will be made available indefinitely.
Access Criteria
Publicly accessible