NCT06969521

Brief Summary

The goal of this clinical trial is to determine whether perceptual training enhances speech perception and production outcomes in children with Residual Speech Sound Disorders (RSSD). The main questions it aims to answer are:

  • Does pre-treatment speech production accuracy predict treatment response?
  • Does perceptual acuity influence the effectiveness of perception-first versus production-first interventions?

Researchers will compare TAU+Perception-first and TAU-first treatment conditions to see whether the order of intervention affects speech improvement outcomes, particularly in relation to participants' initial perception and production accuracy. Participants will:

  • Complete pre-treatment evaluations to assess /r/ production and speech perception.
  • Be grouped into high or low production and perception accuracy categories based on established thresholds.
  • Be randomly assigned (using a blocked randomization procedure) to one of two treatment arms delivered via telepractice.
  • Participate in the assigned treatment condition designed to target speech sound accuracy.

Randomization is stratified to ensure that treatment groups are balanced with respect to pre-treatment severity in both the perception and production domains.

Trial Health

77
On Track

Trial Health Score

Automated assessment based on enrollment pace, timeline, and geographic reach

Enrollment
60

participants targeted

Target enrollment falls in the 25th–50th percentile for comparable trials

Timeline
14mo left

Started May 2025

Duration is typical for comparable trials

Geographic Reach
1 country

1 active site

Status
recruiting

Health score is calculated from publicly available data and should be used for screening purposes only.

Study Timeline

Key milestones and dates

Study Progress: 46%
May 2025 – Jul 2027

First Submitted

Initial submission to the registry

April 22, 2025

Completed
18 days until next milestone

Study Start

First participant enrolled

May 10, 2025

Completed
4 days until next milestone

First Posted

Study publicly available on registry

May 14, 2025

Completed
2.1 years until next milestone

Primary Completion

Last participant's last visit for primary outcome

July 1, 2027

Expected
Same day as next milestone

Study Completion

Last participant's last visit for all outcomes

July 1, 2027

Last Updated

May 29, 2025

Status Verified

May 1, 2025

Enrollment Period

2.1 years

First QC Date

April 22, 2025

Last Update Submit

May 22, 2025

Conditions

Speech Sound Disorder

Keywords

speech sound disorder, auditory perception, articulation

Outcome Measures

Primary Outcomes (1)

  • Change in perceptually rated accuracy of /r/

    To assess generalization of treatment gains to untreated words, participants will be assessed with standard probes containing 45 syllables, 50 words, and 5 sentences with rhotic targets in various phonetic contexts. Stimuli in each probe will be presented individually in randomized order, with blocking by stimulus type (syllable, word, sentence). No auditory models will be provided; for children with reading difficulty, semantic cues will be provided to elicit the intended word. Individual words will be isolated from the audio record of each word probe and presented in randomized order for binary rating (correct/incorrect) by 4 trained listeners who are blind to treatment condition and time point (but will see the written representation of each target word). The proportion of "correct" ratings for each token will serve as the primary measure of perceptually rated accuracy and as the outcome variable in a multilevel model.

    The timepoints for comparison will be baseline, after both groups have completed 4 weeks of VAB, and after both groups have completed the target 12 weeks of treatment (all intervention types: VAB, perception training, and no treatment).
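As a rough illustration (not the trial's actual analysis code), the per-token measure described above can be computed by pooling the blinded listeners' binary ratings; the listener IDs and token IDs below are hypothetical:

```python
from collections import defaultdict

def token_accuracy(ratings):
    """Pool binary ratings (1=correct, 0=incorrect) from multiple blinded
    listeners into a per-token proportion-correct score.

    `ratings` is a list of (token_id, listener_id, rating) tuples.
    Returns {token_id: proportion of 'correct' ratings}.
    """
    totals = defaultdict(lambda: [0, 0])  # token_id -> [n_correct, n_ratings]
    for token_id, _listener, rating in ratings:
        totals[token_id][0] += rating
        totals[token_id][1] += 1
    return {tok: n_correct / n for tok, (n_correct, n) in totals.items()}

# Hypothetical data: 4 listeners each rating two /r/ word tokens.
demo = [
    ("rake_01", "L1", 1), ("rake_01", "L2", 1),
    ("rake_01", "L3", 0), ("rake_01", "L4", 1),
    ("red_01", "L1", 0), ("red_01", "L2", 0),
    ("red_01", "L3", 1), ("red_01", "L4", 0),
]
print(token_accuracy(demo))  # rake_01 -> 0.75, red_01 -> 0.25
```

The resulting per-token proportions would then feed into the multilevel model named as the primary outcome; that modeling step is omitted here.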

Secondary Outcomes (3)

  • Socio-emotional well-being

    Baseline and after all interventions are completed (target time frame: 12 weeks)

  • Percent accuracy pooled across Identification Perception task

    The timepoints for comparison will be baseline, after both groups have completed 4 weeks of VAB, and after both groups have completed the target 12 weeks of treatment (all intervention types: VAB, perception training, and no treatment).

  • Percent accuracy pooled across Category Goodness Perception task

    The timepoints for comparison will be baseline, after both groups have completed 4 weeks of VAB, and after both groups have completed the target 12 weeks of treatment (all intervention types: VAB, perception training, and no treatment).

Study Arms (3)

ORDER: Visual acoustic biofeedback training

ACTIVE COMPARATOR

The investigators will use the following approach, adopted successfully for the RCT in the previous funding cycle: (1) Participants will be randomized after providing informed consent, meeting eligibility requirements, and completing the tasks and clinician-rated baselines that determine response group (High, Low). (2) For each perception accuracy group, the statistician will develop 2 batches of 10 concealed envelopes for assignment, one for high production accuracy participants and one for low production accuracy participants. Each will contain 10 participant assignments in random order: 5 TAU+Perception-first, 5 TAU-first. Thus, once the investigators have recruited the first 10 participants for one subgroup (e.g., Low Perceptual Accuracy, Low Production Accuracy), another batch of 10 envelopes will be generated to allocate the next 10 children recruited in that subgroup.

Behavioral: Visual acoustic biofeedback: ORDER

ORDER: Perceptual training

EXPERIMENTAL

The investigators will use the following approach, adopted successfully for the RCT in the previous funding cycle: (1) Participants will be randomized after providing informed consent, meeting eligibility requirements, and completing the tasks and clinician-rated baselines that determine response group (High, Low). (2) For each perception accuracy group, the statistician will develop 2 batches of 10 concealed envelopes for assignment, one for high production accuracy participants and one for low production accuracy participants. Each will contain 10 participant assignments in random order: 5 TAU+Perception-first, 5 TAU-first. Thus, once the investigators have recruited the first 10 participants for one subgroup (e.g., Low Perceptual Accuracy, Low Production Accuracy), another batch of 10 envelopes will be generated to allocate the next 10 children recruited in that subgroup.

Behavioral: Perception Training: ORDER

ORDER: No treatment

NO INTERVENTION

A 4-week period of no treatment.
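The concealed-envelope allocation described for the study arms can be sketched in code (a simplified illustration with hypothetical helper names; in the trial, the statistician prepares physical envelopes per subgroup):

```python
import random

# Each batch of 10 envelopes holds 5 of each assignment, per the protocol.
TREATMENTS = ["TAU+Perception-first"] * 5 + ["TAU-first"] * 5

def new_envelope_batch(rng):
    """One batch of 10 concealed assignments for a single subgroup,
    shuffled into random order."""
    batch = TREATMENTS.copy()
    rng.shuffle(batch)
    return batch

def assign(subgroup, batches, rng):
    """Draw the next envelope for a (perception, production) subgroup,
    opening a fresh batch of 10 whenever the current one is exhausted."""
    if not batches.get(subgroup):
        batches[subgroup] = new_envelope_batch(rng)
    return batches[subgroup].pop()

rng = random.Random(0)
batches = {}
# Hypothetical stream of 10 enrollees in the Low/Low subgroup.
arms = [assign(("Low", "Low"), batches, rng) for _ in range(10)]
print(arms.count("TAU+Perception-first"))  # 5 per batch of 10, by design
```

Blocking in batches of 10 guarantees an exact 5/5 split within each stratum no matter where recruitment stops mid-study, which is the balance property the protocol is after.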

Interventions

In visual-acoustic biofeedback treatment, elements of traditional articulation treatment are used, including auditory models, verbal descriptions of correct articulator placement, and cues for repetitive motor practice, with images and diagrams of the vocal tract as visual aids. These strategies are supplemented with a dynamic display of the speech signal in the form of the real-time LPC (Linear Predictive Coding) spectrum (Sona-Match module of PENTAX Sona-Speech software). Because correct and incorrect productions of /r/ contrast acoustically in the frequency of the third formant (F3), participants will be cued to make their real-time LPC spectrum match a visual target characterized by a low F3 frequency. They will be encouraged to attend to the visual display while adjusting the placement of their articulators and observing how those adjustments impact F3. Knowledge-of-performance feedback will typically involve reference to the location of the third peak on the visual display.

ORDER: Visual acoustic biofeedback training
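The acoustic principle behind the biofeedback display can be illustrated with a toy LPC analysis (an offline sketch under simplifying assumptions, not the Sona-Match implementation): fit an all-pole model to the signal and read resonance frequencies, such as F3, off the pole angles.

```python
import cmath
import numpy as np

def lpc_formants(x, order, fs):
    """Estimate formant-like resonance frequencies via autocorrelation LPC:
    Levinson-Durbin recursion for the all-pole coefficients, then convert
    the angles of the upper-half-plane poles to Hz."""
    n = len(x)
    r = [float(np.dot(x[: n - k], x[k:])) / n for k in range(order + 1)]
    a, e = [1.0], r[0]
    for i in range(1, order + 1):        # Levinson-Durbin recursion
        acc = sum(a[j] * r[i - j] for j in range(i))
        k = -acc / e
        ext = a + [0.0]
        a = [ext[j] + k * ext[i - j] for j in range(i + 1)]
        e *= 1.0 - k * k
    poles = np.roots(a)
    return sorted(cmath.phase(p) * fs / (2 * cmath.pi)
                  for p in poles if p.imag > 0)

# Synthetic two-resonance signal (stand-ins for two formants): drive an
# AR(4) filter with known pole frequencies, then check LPC recovers them.
fs, true_freqs, radius = 10000.0, [1500.0, 2800.0], 0.95
poles = [radius * cmath.exp(2j * cmath.pi * f / fs) for f in true_freqs]
a_true = np.poly(poles + [p.conjugate() for p in poles]).real
rng = np.random.default_rng(0)
w = rng.standard_normal(30000)
x = np.zeros_like(w)
for t in range(len(w)):                  # x[t] = w[t] - sum a[j] * x[t-j]
    x[t] = w[t] - sum(a_true[j] * x[t - j] for j in range(1, 5) if t - j >= 0)
print(lpc_formants(x, 4, fs))  # ~ [1500, 2800] Hz
```

A real-time display like the one described would run such an analysis on short windowed frames and cue the child when the third spectral peak sits near the low-F3 target.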

Description: Perceptual training involves self-paced presentation of auditory stimuli via a computerized software program (Gorilla). Stimuli are organized into three separate tasks. In tasks 1 and 3, which train category goodness judgment, participants will hear 75 naturally produced speech tokens containing /r/ from various speakers, with a balance of correct and incorrect productions. They will classify each /r/ as correct or incorrect and receive feedback on the accuracy of their classification. Tasks 1 and 3 differ in that task 1 will feature a subset of items designed to provide focused practice on a specific context (e.g., initial /r/ as in red; /r/ as syllable nucleus as in sir), with increasing difficulty over time, whereas task 3 will feature randomly selected items representing all contexts and difficulty levels. In task 2, participants will hear 75 items drawn from the synthetic rake-wake continuum used in the identification task administered at baseline, but they will recei…

ORDER: Perceptual training

Eligibility Criteria

Age: 8 Years - 17 Years
Sex: All
Healthy Volunteers: No
Age Groups: Child (0-17)

You may qualify if:

  • Must be between 8;0 and 17;11 years of age at the time of enrollment.
  • Must speak English as the dominant language (i.e., must have begun learning English by age 2, per parent report).
  • Must speak a rhotic dialect of English.
  • Must pass a pure-tone hearing screening at 20 dB hearing level.
  • Must pass a brief examination of oral structure and function.
  • Must exhibit less than 30% accuracy, based on consensus across 2 trained listeners, on a probe list eliciting rhotics in various phonetic contexts at the word level.
  • Must exhibit no more than 3 sounds other than /r/ in error on the Goldman-Fristoe Test of Articulation-3 (GFTA-3).

You may not qualify if:

  • Must not receive a T score more than 1.3 SD below the mean on the Wechsler Abbreviated Scale of Intelligence-2 (WASI-2) Matrix Reasoning.
  • Must not receive a scaled score of 7 or higher on the Recalling Sentences and Formulated Sentences subtests of the Clinical Evaluation of Language Fundamentals-5 (CELF-5).
  • Must not have an existing diagnosis of a developmental disability or major neurobehavioral syndrome such as cerebral palsy, Down Syndrome, or Autism Spectrum Disorder.

Contact the study team to confirm eligibility.

Sponsors & Collaborators

Study Sites (1)

Montclair State University

Montclair, New Jersey, 07403, United States

RECRUITING

MeSH Terms

Conditions

Speech Sound Disorder

Condition Hierarchy (Ancestors)

Communication Disorders › Neurodevelopmental Disorders › Mental Disorders

Study Officials

  • Elaine Hitchcock, PhD

    Montclair State University

    PRINCIPAL INVESTIGATOR

Central Study Contacts

Elaine R. Hitchcock, PhD

CONTACT

Study Design

Study Type
interventional
Phase
not applicable
Allocation
RANDOMIZED
Masking
SINGLE
Who Masked
OUTCOMES ASSESSOR
Masking Details
All perceptual ratings will be obtained from blinded, skilled clinician listeners recruited and trained in previous studies. Following protocols refined in previous published research, binary rating responses (1=correct; 0=incorrect) will be aggregated over at least 4 unique listeners per token.
Purpose
TREATMENT
Intervention Model
CROSSOVER
Model Details: Children with RSSD may vary in pre-treatment speech production severity, and the extent to which they can approximate /r/ may be an important indicator of subsequent treatment response. In addition, perceptual acuity may influence how participants respond to perception and/or production treatment. Therefore, a blocked randomization procedure will be used to protect against a situation where treatment groups are unbalanced with respect to pre-treatment severity in either the perception or production domain. Based on the treating clinicians' perceptual ratings of participants' performance in /r/ word probes administered in the pre-treatment evaluation phase, participants will be categorized as High Accuracy (>10% accuracy) or Low Accuracy (<=10% accuracy), a cutoff determined from pre-treatment baseline data aggregated over 11 studies previously conducted by our team. We will henceforth refer to these groups as "production accuracy groups." In addition, we will use the criteria adopted i…
Sponsor Type
OTHER
Responsible Party
SPONSOR

Study Record Dates

First Submitted

April 22, 2025

First Posted

May 14, 2025

Study Start

May 10, 2025

Primary Completion (Estimated)

July 1, 2027

Study Completion (Estimated)

July 1, 2027

Last Updated

May 29, 2025

Record last verified: 2025-05

Data Sharing

IPD Sharing
Will not share
