NCT07383753

Brief Summary

Cerebral palsy (CP) affects approximately 1 in 500 Canadian children, and the majority experience hand and arm limitations that impact independence, participation in daily activities, and overall quality of life. Many children require ongoing clinical assessments and therapy delivered in specialized centres, creating significant burden related to travel, scheduling, and interruptions to school and work. Barriers such as geography, socioeconomic factors, and pandemic-related service disruptions have further limited equitable access to in-person care. Although virtual care has expanded rapidly and families have expressed strong interest in hybrid care models, there is currently no validated approach for conducting comprehensive virtual hand-arm assessments for children with CP. Virtual administration of standardized assessments, individualized goal-based evaluations, and naturalistic observation tools has not been systematically studied. Evidence is urgently needed to determine which assessments can be administered virtually, how acceptable and feasible they are for families, and whether virtual and in-person assessment methods produce equivalent results.

Trial Health

Trial Health Score: 77 (On Track)

Automated assessment based on enrollment pace, timeline, and geographic reach.

Enrollment: 100 participants targeted (target at P50-P75 for all trials)

Timeline: Started October 2025; approximately 8 months remaining

Geographic Reach: 3 active sites in 1 country

Status: Recruiting

Health score is calculated from publicly available data and should be used for screening purposes only.


Study Timeline

Key milestones and dates

Study Progress: 45%
October 2025 to January 2027

Study Start

First participant enrolled

October 22, 2025

Completed

First Submitted

Initial submission to the registry

December 9, 2025

Completed

First Posted

Study publicly available on registry

February 3, 2026

Completed

Primary Completion

Last participant's last visit for primary outcome

January 1, 2027

Expected

Study Completion

Last participant's last visit for all outcomes

January 1, 2027

Last Updated

February 17, 2026

Status Verified

February 1, 2026

Enrollment Period

1.2 years

First QC Date

December 9, 2025

Last Update Submit

February 12, 2026

Conditions

Cerebral Palsy

Keywords

Virtual assessment, Tele-rehabilitation, Remote upper limb assessment, Virtual care, Pediatric rehabilitation

Outcome Measures

Primary Outcomes (24)

  • Feasibility of Virtual Upper-Limb Assessments: Completion Rate of Virtual aROM

    The percentage of enrolled participants who complete the virtually administered active range of motion (aROM) assessment. Feasibility success is defined a priori as >70% completion.

    From enrollment through completion of the 2 virtual assessments (approximately 2 weeks).

  • Feasibility of Virtual Upper-Limb Assessments: Completion Rate of Virtual Box and Block Test (BBT)

    The percentage of enrolled participants who complete the virtually administered Box and Block Test (BBT). Feasibility success is defined a priori as >70% completion.

    From enrollment through completion of the 2 virtual assessments (approximately 2 weeks).

  • Feasibility of Virtual Upper-Limb Assessments: Completion Rate of Virtual Quality of Upper Extremity Skills Test (QUEST)

    The percentage of enrolled participants who complete the virtually administered Quality of Upper Extremity Skills Test (QUEST). Feasibility success is defined a priori as >70% completion.

    From enrollment through completion of the 2 virtual assessments (approximately 2 weeks).

  • Feasibility of Virtual Upper-Limb Assessments: Completion Rate of Virtual SHUEE Spontaneous Functional Analysis (SHUEE-SFA)

    The percentage of enrolled participants who complete the virtually administered SHUEE Spontaneous Functional Analysis (SHUEE-SFA). Feasibility success is defined a priori as >70% completion.

    From enrollment through completion of the 2 virtual assessments (approximately 2 weeks).

  • Feasibility of Virtual Upper-Limb Assessments: Recruitment and Attrition Metrics

    Recruitment rate (number of participants enrolled over the recruitment period), attrition rate (percentage of enrolled participants who do not complete the virtual assessment component), and documented reasons for non-eligibility, non-participation, or withdrawal.

    From enrollment through completion of participation (approximately 3-4 weeks).

  • Feasibility of Virtual Upper-Limb Assessments: Ability to Obtain Required Household Materials for Virtual Assessment

    The percentage of families who report successfully obtaining all required household materials needed to complete the virtual standardized assessments from home.

    Prior to commencement of virtual assessments (Week 1)

  • Feasibility of Virtual Upper-Limb Assessments: Technical, Environmental, Behavioral, and Cognitive Challenges During Virtual Sessions

    The number of technical (e.g., dropped connection), video quality (e.g., lighting, camera angle), behavioral (e.g., distraction), and cognitive (e.g., confusion) challenges observed during virtual assessment sessions, coded from session videos using the standardized Observational Checklist and Behavioural Observation Research Interactive Software (BORIS).

    During Virtual Assessment 1 and Virtual Assessment 2 (approximately 2 weeks)

  • Feasibility of Virtual Upper-Limb Assessments: Duration of Virtual and In-Person aROM

    Time required to complete the active range of motion (aROM) when administered virtually compared to in person, as recorded using the standardized Observational Checklist.

    During Virtual Assessment #1, Virtual Assessment #2 and In-Person Assessment Session #1 (approximately 3 weeks)

  • Feasibility of Virtual Upper-Limb Assessments: Duration of Virtual and In-Person BBT

    Time required to complete the Box and Blocks Test (BBT) when administered virtually compared to in person, as recorded using the standardized Observational Checklist.

    During Virtual Assessment #1, Virtual Assessment #2 and In-Person Assessment Session #1 (approximately 3 weeks)

  • Feasibility of Virtual Upper-Limb Assessments: Duration of Virtual and In-Person QUEST

    Time required to complete the QUEST when administered virtually compared to in person, as recorded using the standardized Observational Checklist.

    During Virtual Assessment #1, Virtual Assessment #2 and In-Person Assessment Session #1 (approximately 3 weeks)

  • Feasibility of Virtual Upper-Limb Assessments: Duration of Virtual and In-Person SHUEE-SFA

    Time required to complete the SHUEE-SFA when administered virtually compared to in person, as recorded using the standardized Observational Checklist.

    During Virtual Assessment #1, Virtual Assessment #2 and In-Person Assessment Session #1 (approximately 3 weeks)

  • Acceptability of Virtual aROM (Caregiver and Therapist)

    Acceptability of virtual administration of the active range of motion (aROM) assessment will be measured using a post-session Acceptability Survey administered via REDCap. After each assessment session, caregivers and the occupational therapist will each rate whether the virtual administration of the aROM was comparable or preferable to in-person administration. Acceptability success is defined a priori as >70% comparable or preferable ratings.

    Immediately after each virtual and in-person assessment (weeks 1-3).

  • Acceptability of Virtual Box and Block Test (Caregiver and Therapist)

    Acceptability of virtual administration of the Box and Block Test will be measured using a post-session Acceptability Survey administered via REDCap. After each assessment session, caregivers and the occupational therapist will each rate whether the virtual administration of the BBT was comparable or preferable to in-person administration. Acceptability success is defined a priori as >70% comparable or preferable ratings.

    Immediately after each virtual and in-person assessment (weeks 1-3).

  • Acceptability of Virtual QUEST (Caregiver and Therapist)

    Acceptability of virtual administration of the QUEST will be measured using a post-session Acceptability Survey administered via REDCap. After each assessment session, caregivers and the occupational therapist will each rate whether the virtual administration of the QUEST was comparable or preferable to in-person administration. Acceptability success is defined a priori as >70% comparable or preferable ratings.

    Immediately after each virtual and in-person assessment (weeks 1-3).

  • Acceptability of Virtual SHUEE-SFA (Caregiver and Therapist)

    Acceptability of virtual administration of the SHUEE-SFA will be measured using a post-session Acceptability Survey administered via REDCap. After each assessment session, caregivers and the occupational therapist will each rate whether the virtual administration of the SHUEE-SFA was comparable or preferable to in-person administration. Acceptability success is defined a priori as >70% comparable or preferable ratings.

    Immediately after each virtual and in-person assessment (weeks 1-3).

  • Equivalence of Virtual and In-Person aROM Scores

    Agreement between virtual and in-person assessment aROM scores (scored live, 2-3 days apart) will be examined using mean absolute differences, intraclass correlation coefficients (ICCs; target ICC ≥0.90, lower 95% CI >0.60), Bland-Altman limits of agreement, and coefficient of variation (<10%). Equivalence is met if 80% confidence limits fall within each test's minimally important change or smallest detectable difference. Factors contributing to low agreement (e.g., internet quality, scope of view) will be explored using session videos.

    Across the two virtual assessments and subsequent in-person assessment (approximately 3 weeks).

  • Equivalence of Virtual and In-Person BBT Scores

    Agreement between virtual and in-person assessment BBT scores (scored live, 2-3 days apart) will be examined using mean absolute differences, intraclass correlation coefficients (ICCs; target ICC ≥0.90, lower 95% CI >0.60), Bland-Altman limits of agreement, and coefficient of variation (<10%). Equivalence is met if 80% confidence limits fall within each test's minimally important change or smallest detectable difference. Factors contributing to low agreement (e.g., internet quality, scope of view) will be explored using session videos.

    Across the two virtual assessments and subsequent in-person assessment (approximately 3 weeks).

  • Equivalence of Virtual and In-Person QUEST Scores

    Agreement between virtual and in-person assessment QUEST scores (scored live, 2-3 days apart) will be examined using mean absolute differences, intraclass correlation coefficients (ICCs; target ICC ≥0.90, lower 95% CI >0.60), Bland-Altman limits of agreement, and coefficient of variation (<10%). Equivalence is met if 80% confidence limits fall within each test's minimally important change or smallest detectable difference. Factors contributing to low agreement (e.g., internet quality, scope of view) will be explored using session videos.

    Across the two virtual assessments and subsequent in-person assessment (approximately 3 weeks).

  • Equivalence of Virtual and In-Person SHUEE-SFA Scores

    Agreement between virtual and in-person assessment SHUEE-SFA scores (scored live, 2-3 days apart) will be examined using mean absolute differences, intraclass correlation coefficients (ICCs; target ICC ≥0.90, lower 95% CI >0.60), Bland-Altman limits of agreement, and coefficient of variation (<10%). Equivalence is met if 80% confidence limits fall within each test's minimally important change or smallest detectable difference. Factors contributing to low agreement (e.g., internet quality, scope of view) will be explored using session videos.

    Across the two virtual assessments and subsequent in-person assessment (approximately 3 weeks).

  • Test-Retest Reliability of Virtual aROM

    Reliability will be evaluated using ICCs for (a) the two virtual assessments performed one week apart (live scoring), and (b) live scoring versus video-based scoring. Paired t-tests will examine systematic differences across repeated sessions.

    Between Virtual Assessment 1 and Virtual Assessment 2 (1-week interval).

  • Test-Retest Reliability of Virtual BBT

    Reliability will be evaluated using ICCs for (a) the two virtual assessments performed one week apart (live scoring), and (b) live scoring versus video-based scoring. Paired t-tests will examine systematic differences across repeated sessions.

    Between Virtual Assessment 1 and Virtual Assessment 2 (1-week interval).

  • Test-Retest Reliability of Virtual QUEST

    Reliability will be evaluated using ICCs for (a) the two virtual assessments performed one week apart (live scoring), and (b) live scoring versus video-based scoring. Paired t-tests will examine systematic differences across repeated sessions.

    Between Virtual Assessment 1 and Virtual Assessment 2 (1-week interval).

  • Test-Retest Reliability of Virtual SHUEE-SFA

    Reliability will be evaluated using ICCs for (a) the two virtual assessments performed one week apart (live scoring), and (b) live scoring versus video-based scoring. Paired t-tests will examine systematic differences across repeated sessions.

    Between Virtual Assessment 1 and Virtual Assessment 2 (1-week interval).

  • Predictors of Feasibility, Acceptability, and Equivalence

    Logistic regression analyses will examine whether participant characteristics (age, disability level [MACS], gender, sex, ethnicity, socioeconomic status) predict three dichotomized outcomes: (1) completion of virtual standardized assessments (yes/no), (2) willingness to participate in future virtual assessments (yes/no), and (3) acceptable agreement between virtual and in-person assessment scores (yes/no).

    From enrollment through completion of participation (approximately 3-4 weeks).
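
The equivalence and reliability outcomes above rely on standard agreement statistics (ICC, Bland-Altman limits of agreement). As a minimal illustrative sketch, the following computes a two-way random-effects, single-measure ICC(2,1) and Bland-Altman bias and limits from paired virtual and in-person scores. All data here are made up for illustration; they are not study data, and the trial's own analysis may use different software and ICC variants.

```python
import numpy as np

def icc_2_1(x, y):
    """Two-way random-effects, single-measure ICC(2,1) for two paired raters/conditions."""
    data = np.column_stack([x, y]).astype(float)
    n, k = data.shape
    grand = data.mean()
    # Partition total sum of squares into rows (subjects), columns (conditions), error.
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    # Shrout & Fleiss ICC(2,1) formula.
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def bland_altman(x, y):
    """Mean bias and 95% limits of agreement for paired measurements."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical paired scores for 8 participants (virtual vs. in-person).
virtual = np.array([42, 55, 38, 61, 47, 50, 44, 58])
inperson = np.array([44, 54, 40, 60, 49, 51, 45, 57])

icc = icc_2_1(virtual, inperson)
bias, lo, hi = bland_altman(virtual, inperson)
print(f"ICC(2,1)={icc:.3f}, bias={bias:.2f}, LoA=({lo:.2f}, {hi:.2f})")
```

Under the trial's stated criteria, an ICC of at least 0.90 (lower 95% CI above 0.60) and limits of agreement within each test's minimally important change would support equivalence.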

Secondary Outcomes (10)

  • Feasibility of Caregiver-Recorded Activity Videos (PQRS)

    During weeks 1-3.

  • Acceptability of Caregiver-Recorded Videos

    Immediately after video submission (weeks 1-3).

  • Test-Retest Reliability of Perceived Quality Rating Scale (PQRS) Scores

    Across two video-recording periods separated by approximately 1 week

  • Feasibility of Gamified Assessment Tasks

    During the in-person assessment (week 3).

  • Acceptability of Gamified Assessment Tasks

    In-person assessment (Week 3).

  • +5 more secondary outcomes

Other Outcomes (2)

  • Qualitative Themes on Experiences With Virtual and In-Person Assessment

    Week 4

  • Integrated Mixed-Methods Findings on Feasibility, Acceptability, Equivalence, and Family/Clinician Experience

    After completion of all quantitative and qualitative analyses (approximately weeks 4-6 following the final participant visit)

Study Arms (1)

Children with Cerebral Palsy (CP)

Children and adolescents aged 6 to 17 years with a confirmed diagnosis of cerebral palsy and functional hand-arm abilities classified within MACS Levels I-III. All participants must have the cognitive and physical capacity to participate in virtual assessments lasting up to 30 minutes at a time, access to a device and internet connection suitable for videoconferencing, and a caregiver willing to participate in study procedures and provide input on preferences and experiences. Participants complete a standardized series of virtual and in-person upper-limb assessments, wear wrist-worn inertial sensors at home for five days, and contribute caregiver- and child-reported feedback on feasibility, acceptability, and preferences.

Other: Upper-Limb Virtual and In-Person Assessment

Interventions

Participants complete a standardized upper-limb assessment protocol that includes two virtual videoconference assessments, delivered one week apart to evaluate test-retest reliability, and one in-person clinic assessment with the same research therapist to enable within-participant comparison of virtual and in-person scores. After each session, children, caregivers, and therapists complete brief surveys assessing feasibility, ease of completion, acceptability, and preferences. Participants also wear bilateral wrist-worn inertial sensors for five consecutive days at home to collect continuous data on naturalistic upper-limb activity. In addition, families provide caregiver-recorded videos of the child performing two preselected meaningful activities in their home environment. These videos are later scored using the Perceived Quality Rating Scale (PQRS) to evaluate individualized functional performance.

Children with Cerebral Palsy (CP)

Eligibility Criteria

Age: 6 to 17 years
Sex: All
Healthy Volunteers: No
Age Groups: Child (0-17)
Sampling Method: Non-Probability Sample
Study Population

The study population consists of children and adolescents aged 6 to 17 years with a confirmed diagnosis of cerebral palsy who have functional hand and arm abilities classified within Manual Ability Classification System (MACS) Levels I-III. Participants must have sufficient cognitive and physical capacity to participate in virtual videoconference-based assessments for up to 30 minutes at a time and must have access to an internet-enabled device suitable for completing virtual sessions. Each participant must be accompanied by a caregiver who can assist with study procedures, provide demographic and experience-related information, and record brief home activity videos.

You may qualify if:

  • Have a diagnosis of cerebral palsy
  • Are between 6 and 17 years old with sufficient cognitive capacity and cooperation to sit without a break for 30 minutes at a time
  • Function at MACS levels I (handles objects easily) to III (handles objects with difficulty)
  • Have no visual limitations that would interfere with videoconferencing
  • Have a caregiver willing to participate and answer questions about preferences
  • Have an appropriate device and internet access for videoconferencing

You may not qualify if:

  • Active treatments that might affect the stability of upper-limb function over the study period (e.g., botulinum toxin injections or constraint therapy within the last two months, or upper-extremity surgery within the last six months)

Contact the study team to confirm eligibility.

Sponsors & Collaborators

Study Sites (3)

Grandview Kids

Ajax, Ontario, L1T 0R3, Canada

RECRUITING

Children's Hospital of Eastern Ontario

Ottawa, Ontario, K1H 8L1, Canada

RECRUITING

Holland Bloorview Kids Rehabilitation Hospital

Toronto, Ontario, M4G 1R8, Canada

RECRUITING

MeSH Terms

Conditions

Cerebral Palsy

Condition Hierarchy (Ancestors)

Brain Damage, Chronic; Brain Diseases; Central Nervous System Diseases; Nervous System Diseases

Study Officials

  • Elaine Biddiss, PhD

    Holland Bloorview Kids Rehabilitation Hospital, Bloorview Research Institute

    PRINCIPAL INVESTIGATOR
  • Virginia Wright, PhD

    Holland Bloorview Kids Rehabilitation Hospital, Bloorview Research Institute

    PRINCIPAL INVESTIGATOR

Central Study Contacts

Study Design

Study Type
Observational
Observational Model
Case Only
Time Perspective
Prospective
Sponsor Type
Other
Responsible Party
Principal Investigator
PI Title
Principal Investigator

Study Record Dates

First Submitted

December 9, 2025

First Posted

February 3, 2026

Study Start

October 22, 2025

Primary Completion (Estimated)

January 1, 2027

Study Completion (Estimated)

January 1, 2027

Last Updated

February 17, 2026

Record last verified: 2026-02

Data Sharing

IPD Sharing
Will not share
