COMUTTI - A Research Project Dedicated to Finding Smart Ways of Using Technology for a Better Tomorrow for Everyone, Everywhere.
Interventional study · 33 participants targeted · 1 country · 1 active site
Brief Summary
According to the World Health Organization, one in 160 children worldwide has an ASD. Around 25% to 30% of these children are unable to use verbal language to communicate (non-verbal ASD) or are minimally verbal, i.e., use fewer than 10 words (mv-ASD). The ability to communicate is a crucial life skill, and difficulties with communication can have a range of negative consequences, such as poorer quality of life and behavioural difficulties. Communication interventions generally aim to improve children's ability to communicate either through speech or by supplementing speech with other means (e.g., sign language, pictures, or Augmentative and Alternative Communication - AAC - tools). Individuals with non-verbal ASD or mv-ASD often communicate through vocalizations that in some cases have a self-consistent phonetic association with concepts (e.g., "ba" to mean "bathroom") or are onomatopoeic expressions (e.g., "woof" to refer to a dog). In most cases, however, vocalizations sound arbitrary; even though they vary in tone, pitch, and duration, it is extremely difficult to interpret the intended message or the emotional or physical state they convey, creating a barrier between the person with ASD and the rest of the world that generates stress and frustration. Only caregivers with a long-term acquaintance with the subjects are able to decode such wordless sounds and assign them unique meanings. This project aims to define algorithms, methods, and technologies that identify the communicative intent of vocal expressions produced by children with mv-ASD, and to create tools that help people unfamiliar with these individuals understand them during spontaneous conversations.
Study Timeline
Key milestones and dates
- Study Start (first participant enrolled): July 27, 2021 (completed)
- First Submitted (initial submission to the registry): November 25, 2021 (completed)
- First Posted (study publicly available on registry): December 8, 2021 (completed)
- Primary Completion (last participant's last visit for primary outcome): December 31, 2024 (completed)
- Study Completion (last participant's last visit for all outcomes): December 31, 2024 (completed)
- Last Updated: May 13, 2025
- Total duration: approximately 3.4 years
Outcome Measures
Primary Outcomes (3)
Frequency of audio signal samples and their associated labels
Frequency (measured in number per hour) of audio signal samples (sounds and verbalizations) produced by each participant, recorded during hospital stays in various contexts (i.e., during educational interventions and / or moments of unstructured play) and labeled as self-talk, delight, dysregulation, frustration, request, or social exchange. A small wireless recorder (Sony TX800 Digital Voice Recorder, TX Series) will be attached to the participant's clothing using strong magnets. The adults (caregivers and / or operators) will then associate the sounds produced by the child with an affective state and / or the probable meaning of the vocalization (the labels) through a web app.
Time frame: immediately after the intervention
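As a rough illustration of how this outcome could be tallied, here is a minimal Python sketch assuming the labeling web app exports timestamped annotations; the column names and data are hypothetical:

```python
import pandas as pd

# Hypothetical export from the labeling web app: one row per labeled
# vocalization, with the time it occurred and the label assigned by the adult.
annotations = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2021-07-27 09:05", "2021-07-27 09:40",
        "2021-07-27 10:15", "2021-07-27 10:20",
    ]),
    "label": ["request", "delight", "frustration", "request"],
})

# Number of occurrences of each label per hour of recording.
per_hour = (
    annotations
    .set_index("timestamp")
    .groupby("label")
    .resample("1h")
    .size()
)
print(per_hour)
```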
Participant-specific harmonic features derived from the audio signal samples
Temporal and spectral audio features (i.e., pitch-related features, formant features, energy-related features, timing features, and articulation features) extracted from the samples and subsequently used for supervised and unsupervised machine learning analysis. The collected audio signal samples will be segmented around the temporal locations of the labels and associated with the temporally adjacent labels (affective states or probable meanings of the vocalizations). Audio harmonic features (temporal / phonetic characteristics) will then be identified for each participant using supervised / unsupervised machine learning analysis of the audio signal samples. Through this process, participant-specific patterns corresponding to specific communication purposes or emotional states will be identified.
Time frame: immediately after the intervention
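A minimal sketch of this kind of feature extraction, assuming the open-source librosa library (the study does not name its tooling); formant features are omitted here, since they would typically require a phonetics tool such as Praat / parselmouth:

```python
import numpy as np
import librosa

def extract_features(path, start, end, sr=16000):
    """Extract simple pitch-, energy-, and timing-related features
    from one labeled segment of a recording."""
    # Load only the segment adjacent to the label's timestamp.
    y, sr = librosa.load(path, sr=sr, offset=start, duration=end - start)

    # Pitch contour via probabilistic YIN (fundamental frequency).
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"), sr=sr,
    )
    f0 = f0[~np.isnan(f0)]

    # Frame-level RMS energy.
    rms = librosa.feature.rms(y=y)[0]

    return {
        "duration_s": len(y) / sr,                        # timing feature
        "f0_mean": float(f0.mean()) if f0.size else 0.0,  # pitch features
        "f0_std": float(f0.std()) if f0.size else 0.0,
        "rms_mean": float(rms.mean()),                    # energy feature
        "voiced_ratio": float(np.mean(voiced_flag)),      # articulation proxy
    }
```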
Accuracy of machine learning prediction
The classification accuracy of the machine learning analysis, i.e., the number of correct predictions divided by the total number of predictions, which will be tested on a retained test set of recorded audio signal samples. This outcome measure will estimate the usability / utility of the developed tool for vocalization interpretation based on a machine learning analysis of the recorded audio signal samples.
Time frame: immediately after the intervention
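A minimal sketch of how accuracy on a retained test set could be computed, assuming scikit-learn and placeholder feature vectors (neither is specified by the study):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # placeholder per-segment feature vectors
y = rng.choice(["request", "delight", "frustration"], size=200)  # placeholder labels

# Retain a test set that the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Accuracy = correct predictions / total predictions, on the retained set.
print(accuracy_score(y_test, clf.predict(X_test)))
```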
Study Arms (1)
Experimental: audio signal dataset creation and machine learning analysis
Audio signal dataset creation and processing; machine learning analysis; empirical evaluations
Interventions
Clinical evaluation of participants by means of the Autism Diagnostic Observation Schedule (ADOS)
The project tests and adapts the technology developed at MIT for vocalization collection and labeling, and contributes data gathered from Italian subjects (together with its quality validation) to create a multi-cultural dataset and enable cross-cultural studies and analyses. Next, the focus is placed on the analysis of harmonic features of the audio in the vocalizations of the dataset, to identify recurring individual features and patterns corresponding to specific communication purposes or emotional states. Supervised and unsupervised machine learning approaches are developed, and different machine learning algorithms are compared to identify the most accurate ones for the project's goal. Last, an exploratory evaluation of the vocalization-understanding machine learning model is conducted to test the usability and utility of the tool for vocalization interpretation.
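As a sketch of the algorithm-comparison step, assuming scikit-learn, placeholder data, and an arbitrary set of candidate classifiers (none of which are specified by the study):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # placeholder per-segment feature vectors
y = rng.choice(["request", "delight", "social exchange"], size=200)

candidates = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "random forest": RandomForestClassifier(random_state=0),
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
}

# Mean cross-validated accuracy for each candidate algorithm.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```

Stratified cross-validation (the scikit-learn default for classifiers) keeps label proportions stable across folds, which matters when some vocalization labels are rare.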
Eligibility Criteria
You may qualify if:
- having a clinical diagnosis of autism spectrum disorder according to DSM-5 criteria
- using fewer than 10 words
You may not qualify if:
- using any stimulant or non-stimulant medication affecting the central nervous system
- having an identified genetic disorder
- having vision or hearing problems
- suffering from chronic or acute medical illness
Contact the study team to confirm eligibility.
Sponsors & Collaborators
- IRCCS Eugenio Medea (lead)
- Politecnico di Milano (collaborator)
- Massachusetts Institute of Technology (collaborator)
Study Sites (1)
Scientific Institute, IRCCS Eugenio Medea
Bosisio Parini, Lecco, 23842, Italy
Study Officials
- PRINCIPAL INVESTIGATOR
Alessandro Crippa, Ph.D.
IRCCS Eugenio Medea
Study Design
- Study Type
- interventional
- Phase
- not applicable
- Allocation
- NA
- Masking
- NONE
- Purpose
- BASIC SCIENCE
- Intervention Model
- SINGLE GROUP
- Sponsor Type
- OTHER
- Responsible Party
- SPONSOR
Study Record Dates
First Submitted
November 25, 2021
First Posted
December 8, 2021
Study Start
July 27, 2021
Primary Completion
December 31, 2024
Study Completion
December 31, 2024
Last Updated
May 13, 2025
Record last verified: 2025-05
Data Sharing
- IPD Sharing
- Will not share