Maggie Mosher, Ph.D.


  • Assistant Research Professor at the Achievement & Assessment Institute of the University of Kansas


Biography

During my more than 22 years in special education as a teacher, coordinator, building administrator, and district technology trainer, I have delved into innovative technology (e.g., extended reality, AI, machine learning), evidence-based practices (e.g., video modeling, social narratives, role play), and interventions (e.g., VOISS, iKNOW) for students with high-incidence disabilities. I have given over 100 local, national, and international presentations on evidence-based practices (EBPs), interventions, screening, and assessments that assist students with disabilities. In the past ten years, I have collaboratively obtained over $9 million in grants and contracts focused on using innovative technologies and EBPs to create interventions, assessments, and tools for improving SEB competencies and preventing challenging behaviors, particularly for adolescents with high-incidence disabilities from diverse backgrounds.

This proposed research aims to develop and pilot an innovative progress monitoring tool embedded within a collaborative process (iHOPE) to assist educators in meeting the social-emotional-behavioral (SEB) needs of 5th-8th grade students with high-incidence disabilities. I use SEB together in my work because, as the Department of Education (Cardona & Neas, 2021) and United Nations (2019) report, social, emotional, and behavioral competencies are critical, interrelated components of mental health and should be integrated. My developing lines of research are reflected in my 9 peer-reviewed publications in top-tier journals and in my work toward establishing integrated, innovative, efficient, comprehensive, collaborative, and effective strategies for educators to support the SEB development of adolescents. My overarching goal is to prevent SEB issues while bolstering health, ultimately improving student relationships and academic success.

Collaborating with Smith, Rowland, and Frey, we have worked with thousands of educators to implement valid and reliable social narratives and progress monitoring components within 140 virtual reality scenarios (VOISS). This effort led to the creation of a screening tool and a free website guiding educators and parents in identifying and applying targeted SEB skills from the simulation to daily life. My service as a peer reviewer, conference reviewer, and member of the State of Kansas Exemplary Educator Network, along with my involvement in inclusion and social justice initiatives, connects me with a range of stakeholders to collaborate with throughout this grant. With the proposed career development and expert guidance, I am well prepared to complete this research, contributing insights into cultural and developmental SEB priorities, collaborative approaches, and technology's role in monitoring progress in SEB competencies. This knowledge will assist educators in determining, in a feasible and timely manner, whether SEB instruction requires adaptation to better support the development of students with high-incidence disabilities.

Education

B.S. in Education: Intervention Specialist, Concentration in Mild to Moderate Disabilities with a Literacy Endorsement, Franciscan University, 2005
M.S. in School Leadership: Concentration in Learning Systems and Curriculum Design, Baker University, 2010
Ph.D. in Special Education: Instructional Design, Technology, and Innovation with a minor in Research Methodology, University of Kansas, 2023

Research

Perhaps my most significant contribution to the field is the creation of the Technology Immersion Presence Scale (TIPS), the first scale to measure presence within virtual environments developed at a 3rd-grade reading level for people with disabilities ages 8 through adulthood. The scale uses a Likert-type format in which thumbs-up and thumbs-down visuals, paired with their corresponding numbers, indicate agreement or disagreement. The scale is being published after completing rigorous reliability and validity checks with a population of 300 individuals with disabilities across 38 US states. The collaboration on the creation of VOISS (a virtual reality app for social skill development) and VOISS Advisor (a free, fully manualized website) provided valuable knowledge to the field on whether the level of immersion (i.e., head-mounted display vs. iPad vs. Chromebook) influences SEB skill application, generalization, and maintenance. The initial findings of the randomized control trial (Carreon et al., 2023) showed that all platforms (fully immersive and non-immersive) produced significant gains, with no significant difference across platforms in SEB knowledge or application of skills. This study has undergone generalization and maintenance follow-ups; findings show that, over time, the knowledge gained within fully immersive simulations is retained and applied significantly longer than in non-immersive environments (Mosher et al., in press).

Over the past four years, our VOISS team has conducted rigorous studies using single-case design methodology as well as randomized control trials to test various components of the virtual-technology-delivered intervention and supports (e.g., SEB skill domain, levels of immersion, collaboration of educators and parents, educator use of generalization tactics with students). These studies are being prepared for five publications exploring various aspects of the intervention, the evidence-based practices within it (i.e., social narratives, video modeling, role play), and the technology delivering it. In addition, we have conducted extensive psychometric studies of the VOISS screening tool and descriptive inquiry into the social validity of specific skills within current SEB assessment tools.

Within this proposed grant, I hope to develop a PM tool that allows a collaborative decision about whether to use full AI capabilities (i.e., collecting observational data and making recommendations based on those data) or only the speech recognition, natural language processing, speaker identification, and automated transcription capabilities (i.e., the machine learning and large language model components), in which no recommendations are made but data are still collected and transcribed for educator evaluation. Providing both of these capabilities within one app will be significant because some cultures and populations are hesitant about using various forms of AI within schools, even though machine learning and large language models are already available in most classrooms through software such as Google Classroom, the transcription microphone in Google Docs, and Excel and PowerPoint tools that analyze data and create visuals from data or slides placed in the program. Both levels of AI will work together to identify each speaker's voice and track, for example, when a student is louder or softer than a targeted peer, or when one student in a small group interrupts other speakers more often than a targeted peer. The level of AI support beyond that point would be adjustable by the user.
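The two levels described above could be sketched as follows. This is a minimal illustration under assumptions, not the planned tool: the `Segment` structure (speaker label, start/end times, mean loudness), the function names, and the 6 dB loudness threshold are all hypothetical stand-ins for whatever the speaker-identification and transcription pipeline actually produces.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One diarized stretch of speech, as speaker identification might emit it."""
    speaker: str        # speaker label from speaker identification
    start: float        # start time in seconds
    end: float          # end time in seconds
    loudness_db: float  # mean loudness of the segment

def interruption_count(segments, speaker):
    """Count times `speaker` begins talking before the previous speaker finishes."""
    count = 0
    for prev, cur in zip(segments, segments[1:]):
        if cur.speaker == speaker and cur.speaker != prev.speaker and cur.start < prev.end:
            count += 1
    return count

def mean_loudness(segments, speaker):
    """Average loudness across all of a speaker's segments (None if absent)."""
    levels = [s.loudness_db for s in segments if s.speaker == speaker]
    return sum(levels) / len(levels) if levels else None

def transcription_report(segments, target, peer):
    """Transcription-only mode: descriptive data, judgment left to the educator."""
    return {
        "interruptions": {target: interruption_count(segments, target),
                          peer: interruption_count(segments, peer)},
        "mean_loudness_db": {target: mean_loudness(segments, target),
                             peer: mean_loudness(segments, peer)},
    }

def full_ai_report(segments, target, peer):
    """Full-AI mode: the same data plus a recommendation layer on top."""
    report = transcription_report(segments, target, peer)
    notes = []
    if report["interruptions"][target] > report["interruptions"][peer]:
        notes.append(f"{target} interrupts more often than {peer}")
    t = report["mean_loudness_db"][target]
    p = report["mean_loudness_db"][peer]
    if t is not None and p is not None and t - p > 6:  # hypothetical dB threshold
        notes.append(f"{target} speaks noticeably louder than {peer}")
    report["recommendations"] = notes
    return report
```

Because the recommendation layer is a separate function over the same descriptive report, the user-adjustable level of AI support reduces to choosing which report function the app calls.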