Evaluating Verbal Communication in Structured Interactions: Theoretical and Clinical Implications
Study type: interventional
Target enrollment: 300 participants
Locations: 1 country, 1 site
Other identifiers: 1
Brief Summary
The goal of this clinical trial is to learn about the effect of communicative interaction on verbal communication in people with amyotrophic lateral sclerosis (ALS) and age-matched speakers. The main question it aims to answer is: what are the effects of communicative interaction on verbal communication in people with ALS? Participants will read words and sentences in both a solo setting and an interactive setting.
Study Timeline
Key milestones and dates
First Submitted (Completed)
Initial submission to the registry
January 29, 2024
First Posted (Completed)
Study publicly available on registry
February 20, 2024
Study Start (Completed)
First participant enrolled
November 5, 2024
Primary Completion (Expected)
Last participant's last visit for primary outcome
February 28, 2029
Study Completion (Expected)
Last participant's last visit for all outcomes
February 28, 2029
Conditions
- Amyotrophic lateral sclerosis (ALS)
Outcome Measures
Primary Outcomes (5)
Formant frequencies of speech sounds
Formant frequencies that characterize speech sounds will be measured from speech recorded in the intervention task.
Time frame: two 60-minute sessions
Intelligibility of recorded speech
Perceptual judgments will be provided by naïve listeners, working individually, who did not participate in the interactions. Listeners will hear recorded speech of PALS and age-matched speakers recorded across the different tasks and indicate what they heard. The score will be expressed as a percentage, with a possible range of 0-100%. Higher scores indicate a better outcome.
Time frame: two 60-minute sessions
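As a rough illustration of how a percent-correct intelligibility score of this kind could be computed, here is a minimal sketch. The function name and the exact word-matching rule are assumptions; the registry record does not specify the scoring procedure.

```python
def intelligibility_percent(targets, responses):
    """Percent of target words correctly identified by a listener.

    `targets` and `responses` are parallel lists of words: the word
    the speaker produced and the word the listener reported hearing.
    (Hypothetical scoring sketch; the study's actual scoring rules
    are not given in the registry record.)
    """
    if not targets:
        raise ValueError("no trials to score")
    correct = sum(
        t.strip().lower() == r.strip().lower()
        for t, r in zip(targets, responses)
    )
    return 100.0 * correct / len(targets)
```

With exact word matching, a listener who reports "sip" for the target "ship" on one of four trials would score 75%.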
Syntactic properties
Syntactic complexity in the unstructured communication task will be measured through mean length of grammatical units, clausal density, and clause type. Each variable will be assessed at both the dyadic level (e.g., clausal density for both interlocutors together) and at the level of the individual speaker (e.g., clausal density of each speaker). A composite of these measures will provide an index of the syntactic complexity of the conversation.
Time frame: one 60-minute session
Pragmatic properties
The investigators will count the number and duration of silent portions of speech, filled pauses, linguistic mazes, speaking turns, and interruptions in the unstructured communication task. A composite measure of the individual measures will provide an index of an individual's contribution to the conversation.
Time frame: one 60-minute session
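One common way to build composite indices like the syntactic and pragmatic composites described above is to average z-scored component measures per speaker. The record does not specify how the composites are formed, so this sketch simply assumes that approach:

```python
from statistics import mean, stdev

def composite_index(measure_table):
    """Average of z-scored measures across speakers.

    `measure_table` maps a measure name (e.g. "clausal_density") to a
    list of per-speaker values. Each measure is standardized across
    speakers, then the z-scores are averaged per speaker.
    (Illustrative sketch; not the study's actual composite formula.)
    """
    names = list(measure_table)
    n_speakers = len(measure_table[names[0]])
    z_scores = {}
    for name in names:
        vals = measure_table[name]
        m, s = mean(vals), stdev(vals)
        z_scores[name] = [(v - m) / s if s else 0.0 for v in vals]
    # One composite value per speaker.
    return [mean(z_scores[name][i] for name in names)
            for i in range(n_speakers)]
```

Standardizing first keeps measures on very different scales (e.g., pause counts versus clause lengths) from dominating the composite.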
Duration of speech sounds
Durations that characterize speech sounds will be measured from speech recorded in the intervention task.
Time frame: two 60-minute sessions
Study Arms (1)
People with amyotrophic lateral sclerosis, age-matched speakers
EXPERIMENTAL: People with ALS and age-matched speakers will participate in structured communicative interaction.
Interventions
Two interlocutors, one with ALS and a typical, unfamiliar interlocutor or an age-matched speaker and a typical interlocutor, will work together. On each trial, one of the interlocutors will be randomly chosen to be the "speaker" and the other will be the "listener". Each participant in the pair will view the same set of words on their screens. After one second, one of the words will be highlighted on the speaker's screen, they will say the word in the phrase "Click on the \_\_\_\_ this time", and the listener will click on it. After the listener has made their selection, both participants will receive feedback on trial success.
PALS and age-matched speakers will read critical words consisting of target segments (e.g., "hid", "ship", "net") in random order four times. These words will be embedded in the carrier phrase "Click on the \_\_\_ this time." Participants will be instructed to overenunciate the critical words.
Two interlocutors, one with ALS and a typical, unfamiliar interlocutor or an age-matched speaker and a typical interlocutor, will work together. The pairs will be presented with two different versions of the same picture with eight differences chosen to elicit the same target segments (e.g., "hid", "ship", "net"). These pictures will be modified from the LUCID corpus. In total, pairs will complete four picture sets per session. Pairs will be given 5 minutes for each picture set.
Two interlocutors, one with ALS and a typical, unfamiliar interlocutor or an age-matched speaker and a typical interlocutor, will work together. On each trial, one of the interlocutors will be randomly chosen to be the "speaker" and the other will be the "listener". Each participant in the pair will view the same set of words on their screens. After one second, one of the words will be highlighted on the speaker's screen, they will say the word in the phrase "Click on the \_\_\_\_ this time", and the listener will click on it. The speaker will be instructed to overenunciate the critical words. After the listener has made their selection, both participants will receive feedback on trial success.
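The structured interaction protocol above can be sketched as a simple trial loop. The word list, role assignment, and feedback logic here are illustrative stand-ins, not the study's actual software (the record indicates the study runs in LabVanced):

```python
import random

WORDS = ["hid", "ship", "net", "bat"]  # example target set, not the study's list

def run_trial(pair, rng=random.Random(0)):
    """One schematic trial of the word-selection task.

    `pair` is a tuple of two participant labels. Per the protocol,
    roles are assigned at random on each trial, a target word is
    highlighted on the speaker's screen, the speaker produces it in
    the carrier phrase, the listener clicks a word, and both receive
    feedback on trial success.
    """
    speaker, listener = rng.sample(pair, 2)   # random role assignment
    target = rng.choice(WORDS)                # word highlighted for the speaker
    prompt = f"Click on the {target} this time"
    clicked = target                          # stand-in for the listener's click
    success = clicked == target               # feedback shown to both participants
    return {"speaker": speaker, "listener": listener,
            "prompt": prompt, "success": success}
```

In the real task the listener's click depends on what they heard; here it is hard-coded to the target only to keep the sketch self-contained.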
Eligibility Criteria
You may qualify if:
- Speakers with amyotrophic lateral sclerosis (PALS, people with ALS)
- diagnosis of ALS following the revised El Escorial criteria
- no history of other neurological conditions (e.g., stroke)
- no cognitive impairment assessed by Telephone Montreal Cognitive Assessment (mini MoCA)
- detectable speech disturbance according to the ALS Functional Rating Scale-Revised (ALSFRS-R)
- the ability to produce single words
- being a native speaker of American English (AE).
- Age-matched Speakers
- passing the remote hearing screening
- having no known speech, language, or neurological disorders per self-report
- no cognitive impairment assessed by Telephone Montreal Cognitive Assessment (mini MoCA)
- being a functionally native monolingual speaker of American English.
- Unfamiliar Interlocutors
- passing the remote hearing screening
- having no known speech, language, or neurological disorders per self-report
- +3 more criteria
Contact the study team to confirm eligibility.
Sponsors & Collaborators
Study Sites (1)
Speech Core, Pennsylvania State University
University Park, Pennsylvania, 16802, United States
Related Publications (2)
Olmstead AJ, Lee J, Viswanathan N. The Role of the Speaker, the Listener, and Their Joint Contributions During Communicative Interactions: A Tripartite View of Intelligibility in Individuals With Dysarthria. J Speech Lang Hear Res. 2020 Apr 27;63(4):1106-1114. doi: 10.1044/2020_JSLHR-19-00233. Epub 2020 Apr 17.
PMID: 32302251 (Background)

Olmstead, A. J., Viswanathan, N., Cowan, T., & Yang, K. (2021). Phonetic adaptation in interlocutors with mismatched language backgrounds: A case for a phonetic synergy account. Journal of Phonetics, 87, 101054. (Background)
Study Officials
- PRINCIPAL INVESTIGATOR
Jimin Lee, PhD
The Pennsylvania State University
- PRINCIPAL INVESTIGATOR
Navin Viswanathan, PhD
The Pennsylvania State University
- PRINCIPAL INVESTIGATOR
Anne Olmstead, PhD
The Pennsylvania State University
Study Design
- Study Type
- Interventional
- Phase
- Not Applicable
- Allocation
- NA
- Masking
- NONE
- Purpose
- BASIC SCIENCE
- Intervention Model
- SINGLE GROUP
- Sponsor Type
- OTHER
- Responsible Party
- PRINCIPAL INVESTIGATOR
- PI Title
- Associate Professor
Study Record Dates
First Submitted
January 29, 2024
First Posted
February 20, 2024
Study Start
November 5, 2024
Primary Completion (Estimated)
February 28, 2029
Study Completion (Estimated)
February 28, 2029
Last Updated
January 13, 2025
Record last verified: 2025-01
Data Sharing
- IPD Sharing
- Will share
- Shared Documents
- STUDY PROTOCOL, SAP
- Time Frame
- Data will be shared after publication of the study.
- Access Criteria
- All deidentified data, metadata, and related tools will be freely available via Open Science Framework (OSF). The original audio files will be made available (for participants who consent) by request from researchers in the field to ensure responsible use.
The project will involve the collection of audio samples, perceptual judgments, and questionnaire data. All deidentified data will be shared through the Open Science Framework (osf.io). All scripts, protocols, procedures, and analyses will be shared along with the deidentified data so that other researchers can verify and build on the presented results. All shared data will be made available in formats readable with open-source software (e.g., R, OpenOffice, a PDF reader). In addition, platform-specific scripts (e.g., for the experimental software LabVanced) will be shared with explanations so that they may be implemented across different software environments.