BUILDING IT WHEN IT DOES NOT YET EXIST: CREATING AND VALIDATING A SCALE TO MEASURE PERCEIVED BURDEN IN NEEDS ASSESSMENT PRACTICE
Whereas it is a valuable tool for instructional designers and performance improvement practitioners, needs assessment is often avoided due to perceived burdens associated with the process. Given the lack of study of perceived burden within the literature, there was no known existing scale to measure perceived burden. This article describes the process of conceptualizing perceived burden in needs assessment and developing the first scale to measure that construct: the Perceived Burden in Needs Assessment Participants Survey (PBNAPS). Through examining the performance of a pilot instrument, the authors explored the validity and reliability of the PBNAPS. The instrument was found to be reliable (α = .86) across four factors: (a) duties, obligations, and responsibilities; (b) cost; (c) needs assessment facilitator skills; and (d) needs assessment facilitator systemic sensitivities. Ultimately, the final revised PBNAPS instrument demonstrated both internal consistency and applicability across organizational contexts, constituent types, and lengths of affiliation.
INTRODUCTION
Needs assessment (NA) evaluates what is needed to best address a problem or gap in performance, making it a valuable tool in the instructional design (ID) and human performance technology (HPT) spaces (Morrison et al., 2013; Sleezer et al., 2008; Stefaniak et al., 2018). Considering that ID is the “science and art of creating detailed specifications for the development, evaluation, and maintenance of situations which facilitate learning and performance” (Richey et al., 2011, p. 3), NA is a part of the science of ID. Collecting NA data enables the creation of detailed specifications that facilitate learning and performance improvements. NA is also a staple in HPT, or the “study and ethical practice of improving productivity in organizations by designing and developing effective interventions that are results-oriented, comprehensive, and systemic” (Pershing, 2006, p. 6). Both ID and HPT ultimately aim to improve performance. Whereas there are several models of NA, this research adopts the following operational definition: the data-driven search for opportunities to maximize individual, team, or organizational performance by contributing to the effectiveness, efficiency, and/or ease of supporting organizational goals (Pinckney-Lewis, 2021, 2022; Pinckney-Lewis & Baaki, 2020; Stefaniak & Pinckney, 2023).
Although an important practice, the way in which NA is conducted, how often it is conducted, and how it is valued within organizational contexts vary. Classical approaches regard NA as a fundamental, formal process (Lippitt et al., 1958; Zemke, 1998), whereas naturalistic approaches recognize that NA may not be feasible given time and resource constraints (Cervero & Wilson, 2006; Zemke, 1998). When NAs are given the space to be conducted, they range in the level of rigor and time investment from facilitators, as well as in the level of awareness and involvement on the part of the participants and stakeholders within the process. At other times, however, NA is neglected entirely. Clients often avoid NA for several reasons (Adams et al., 2021; Kaufman & Guerra-López, 2013; Zemke, 1998), including perceptions of burden within the process (Pinckney-Lewis, 2021, 2022). In fact, practitioners often go so far as to completely relabel the process (i.e., call the NA process by some other name) (Adams et al., 2021) to limit perceptions of burden (Pinckney-Lewis, 2022).
Yet there is no existing evidence that the perceived burden of the NA experience reflects participants’ actual experience. It would be a disservice to organizations to blindly disavow NA practice due to unfounded beliefs. When the perceived burdens of NA overshadow its inherent value, practitioners are less likely to conduct NA, and organizations fail to benefit from properly contextualized performance improvement interventions (Hopfl, 1994; Marshall & Rossett, 2014; Stefaniak & Pinckney, 2023; Zemke, 1998).
Participant experiences within the NA process and their perceived burden are largely absent from the ID and HPT literature. A search of prominent ID and HPT journals, including Educational Communication and Technology, Educational Technology Research & Development, International Education Studies, Performance Improvement, and Performance Improvement Quarterly, did not yield any scales related to perceived burden in NA (Pinckney-Lewis, 2022). As practitioners within the ID space, when something is conceptualized but does not yet exist, we design and build it. That is exactly the case within the current research. Having a valid and reliable instrument to measure the perceived burdens of NA participants not only honors and prioritizes the experiences of participants, but also provides crucial feedback to inform NA practices and demystify any unwarranted claims about the NA experience.
To fill this gap, this study seeks to validate the Perceived Burden in Needs Assessment Participants Survey (PBNAPS) instrument as a measure of perceptions of burden among participants in the NA process, including managers, leaders, and other stakeholders. Specifically, this article addresses the following research questions: (a) How can the construct of perceived burden be conceptualized and measured? (b) To what extent is the revised PBNAPS instrument internally consistent? This article reports the outcomes of (a) revisiting the literature to better approximate the perceived burden construct and its components, (b) applying best practices in survey construction to refine a preliminary version of the survey instrument in accordance with the literature, (c) conducting a beta review of the revised instrument with a panel of subject matter experts (SMEs), and (d) conducting an exploratory factor analysis (EFA) of the PBNAPS.
Summary of Previous Research on Perceived Burden in Needs Assessment
This study builds on Pinckney-Lewis and Baaki (2020) by revisiting that instrument, which conceptualized perceived burden in NA as (a) lack of humanism or the erroneous prioritization of prescribed technical process steps (Leigh et al., 2000; Wilson & Cervero, 1996), the sole use of quantitative Likert-scale survey data (Witkin, 1994), or an inappropriate top-down approach to addressing needs (Altschuld & Watkins, 2014); (b) a problem mindset, including harboring negative connotations about NA (Kaufman & Guerra-López, 2013) or perceived time demands (Zemke, 1998); (c) inconvenience of involvement or the extent to which client and organizational involvement decreases over time (Kaufman & Guerra-López, 2013); and (d) implementation of recommendations, which could lead to burden via cognitive dissonance if those recommendations counter the expectations of the clients or stakeholders (Kaufman, 1977). When operationalized, these exploratory survey items lacked sufficient reliability (α = .48), and the survey subscales ranged in how well they correlated with the overall measurement. This research revisits the literature to incorporate more rigor into the instrument development process.
Revisiting the Literature
Whether addressed via the lens of cognitive load theory (Beckmann, 2010), change management theory (Hopfl, 1994), or expectancy-value theory (Flake et al., 2015), burden has proved challenging to measure. The nature of burden and how it is perceived is complex, especially within the specific area of the NA process. Therefore, it was important to revisit the literature both systematically and with an appreciation for the intricacy of the burden construct.
Burden
Burden generally refers to a load, either literal or figurative, that is often heavy or negatively connoted. People experience burden in many areas of their lives. Much of what is published on the human experience of burden falls within the medical domain, e.g., the Disease Morbidity Assessment (Wijers et al., 2017), the Perceived Family Burden Scale (Levene et al., 1996), and the Perceived Stress Scale (Nielsen et al., 2016). However, the concept of burden is not documented in the NA participant experience literature (Pinckney-Lewis, 2022).
Redefining Burden in Needs Assessment
When it comes to the NA experience, there are three angles that apply across participant and stakeholder roles: (a) what they are asked to do (i.e., duties, obligations, and responsibilities), (b) what they must give up to accomplish what they are asked to do (i.e., cost), and (c) how they experience interactions with NA practitioners while engaged in the related tasks. Each of these concepts is displayed in Table 1 and explained further in the sections that follow.
Duty, obligation, and responsibility.
In the legal sense, burden can be defined as something that is a duty, obligation, or responsibility (Merriam-Webster, n.d.). Most formal NAs do not take place without some onus placed on participants and stakeholders in addition to the facilitator. Clients and organizational leaders, for example, provide project scoping and oversight (Altschuld & Kumar, 2010; Witkin & Altschuld, 1995). They may serve as the gatekeepers to data access (Kaufman, 1977; Kaufman & Guerra-López, 2013; Rossett, 1982; Stefaniak et al., 2018) and/or otherwise serve as participants in the data collection process itself (Altschuld & Kumar, 2010; Leigh et al., 2000).
Cost.
Within the framework of motivation science expectancy-value models (Eccles, 2005; Flake et al., 2015), Eccles (2005) defined cost as “what an individual has to give up to do a task, as well as the anticipated effort one will need to put into task completion” (p. 113). Whereas initially applied within the educational context, the framework also lends itself to NA contexts such that perceived cost reflects what an individual gives up to complete NA tasks and the effort the individual anticipates putting into those tasks.
Experience of Interactions with Practitioners.
Finally, the ways in which NA participants and stakeholders perceive practitioners round out the third dimension of burden operationalized within this research. The extent to which facilitators exhibit technical and people skills while navigating the organizational social system can also contribute to how participants experience burden. NA is an inherently social process (Wilson & Cervero, 1996), so facilitators must not only be able to technically execute their NA practice, but must also have a firm understanding of the social system(s) in which they are operating. They must navigate the sociopolitical dynamics and existing organizational culture to obtain and sustain the buy-in and trust required to make meaningful contributions (Altschuld & Kumar, 2010; Kaufman & Guerra-López, 2013). When practitioners fail to acknowledge organizational politics, they run the risk of misinterpreting or misrepresenting the nature of the actual needs (Forester, 1989).
Methods
This article draws from a larger mixed methods study, employing a convergence model of triangulation, that explored several aspects of perceived burden; here, the focus is solely on the quantitative process of PBNAPS development, evaluation, and validation. To bring the revised PBNAPS to fruition, the authors applied best practices in survey construction, conducted a beta review of the instrument, and performed an EFA.
Survey Scale Development
Through a combination of revising the Pinckney-Lewis and Baaki (2020) items, modifying items from the Flake et al. (2015) expectancy-value scale, and creating new items to align with this revised conceptualization of perceived burden, the authors drafted a revised PBNAPS as indicated in Figure 1.
Figure 1: Development of the Revised Perceived Burden in Needs Assessment Participants Survey (from Pinckney-Lewis, 2021)
The PBNAPS includes the following subscales to align with the main components that emerged from the literature: (a) perceptions of duty, obligation, and responsibility (PDOR); (b) perceptions of cost (POC); (c) perceptions of practitioner skills (PPS, e.g., perceived appropriateness of the practitioner’s technical and people skills); and (d) perceived systemic sensitivity of the practitioner (PSSP, e.g., treatment of power dynamics, competing interests, negotiation skills, and personal responsibility).
In addition to adhering to this revised conceptualization, the survey included sections to obtain informed consent; demographic data; and a combination of Likert, multiple choice, and open-ended items. To ensure sufficient construct representation of attitudes and perceptions, each subscale included six to eight Likert items (Subedi, 2016; Thorndike & Thorndike-Christ, 2010).
Beta Review and Pilot to Enhance Content and Construct Validity
Five SMEs with expertise in NA and/or survey development participated in the beta review and pilot (Hays & Singh, 2012; Thorndike & Thorndike-Christ, 2010; Worthington & Whittaker, 2006) simultaneously in a two-pronged process. SMEs who had recently participated in an NA as a data provider, client, or stakeholder first piloted the items based on their experiences, indicating how well they agreed with each statement on a seven-point Likert scale from “strongly disagree” to “strongly agree.” Next, the SMEs reviewed the items for wording choice and fit to the perceived burden construct and subscale definitions. These preliminary data allowed for data-informed decisions on which items to keep, revise, or remove from the final survey prior to deployment. As a result, the PBNAPS retained and operationalized 25 items, with six to seven items per subscale, as indicated in Figure 2.
Figure 2: Items Finalized in the PBNAPS (Pinckney-Lewis, 2021)
Balance in Survey Item Directionality to Support Reliability
One of the limitations in the Pinckney-Lewis and Baaki (2020) survey was its poor performance, potentially due to an imbalance of negatively and positively worded items. When scales include a proper balance of directionality, overall survey performance is enhanced, decreasing the prospect of acquiescence in responses (Thorndike & Thorndike-Christ, 2010). In addition to the SME word choice feedback from the beta review, the authors considered the directionality of these items when making the final edits. Figure 3 shows the comparison of item directionality from the previous version to the current PBNAPS, which maintains a near-equal split for overall survey balance with 12 positively worded items and 13 negatively worded items.
Figure 3: Summary of Change in PBNAPS Item Directionality
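Because the instrument mixes directionality, negatively worded items must be reverse-scored before scale scores or reliability coefficients are computed. A minimal sketch in Python, assuming hypothetical item labels (the actual PBNAPS item wording appears in Figure 2):

```python
import pandas as pd

# Hypothetical column names for illustration only; they are not the
# published PBNAPS item labels.
NEGATIVELY_WORDED = ["pdor_3", "poc_2", "poc_5"]

def reverse_code(responses: pd.DataFrame, items, scale_max: int = 7) -> pd.DataFrame:
    """Recode negatively worded items on a 1..scale_max Likert scale so that
    higher scores consistently indicate greater perceived burden."""
    out = responses.copy()
    out[items] = (scale_max + 1) - out[items]  # on a 7-point scale: 8 - x
    return out

# Example usage: scored = reverse_code(raw_responses, NEGATIVELY_WORDED)
```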
Appropriate Likert Scale Demarcations to Support Internal Validity
Whereas there is an ongoing debate over the use of even-numbered versus odd-numbered Likert demarcations, maintaining an odd number does allow respondents a neutral option (Fink, 2013; Thorndike & Thorndike-Christ, 2010). In this case, there was value in the neutral option because (a) this was an exploratory look at how burden is perceived and (b) forcing participants to select responses toward either pole would be unfair, given that the research intentionally did not limit the scope of participants by organizational context or by the formality or thoroughness of the NA process. One change from the 2020 version, though, was to increase the Likert demarcations from five to seven points of granularity for more accurate data and a more appropriate measure of central tendency without sacrificing reliability (Finstad, 2010; Foddy, 1994; Miller, 1956; Thorndike & Thorndike-Christ, 2010).
Internal Consistency, Construct Validity, and Component Structure of the PBNAPS
To assess the internal consistency of the PBNAPS, the authors calculated reliability as measured by Cronbach’s alpha, which is widely accepted as an appropriate measure of reliability (DeVellis, 2017; Pallant, 2016; Thorndike & Thorndike-Christ, 2010). They calculated reliability both for the overall survey and for each of its subscales as an indicator of how internally consistent the refined instrument is. Table 2 displays the overall PBNAPS item correlation matrix, which was used to verify that the items measure the same underlying construct. Finally, they examined whether each of the PBNAPS subscales correlates with the overall measure, as well as with the other subscales, via Pearson’s r (Thorndike & Thorndike-Christ, 2010).
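As a concrete illustration, Cronbach’s alpha can be computed directly from an item-level response matrix. A minimal Python sketch, assuming a pandas DataFrame with one column per (already reverse-scored) item:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = items.dropna()                          # listwise deletion of incompletes
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # per-item sample variances
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example usage: cronbach_alpha(scored) for the full PBNAPS, or
# cronbach_alpha(scored[subscale_columns]) for a single subscale.
```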
The EFA addressed both the construct validity of the instrument and its underlying component structure. After the preliminary analyses, they examined the structure of the correlations and applied the maximum likelihood extraction method to allow for a range of goodness-of-fit indexes for the model (Costello & Osborne, 2005). They then applied a direct oblimin oblique rotation, which allows the factors to correlate, with the degree of obliqueness governed by the delta parameter (López-Aguado & Gutiérrez-Provecho, 2019). They determined factor retention after considering the initial eigenvalues, scree plot visualization, and factor matrices (Pallant, 2016; Pinckney-Lewis, 2021).
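These extraction and rotation choices can be reproduced with standard open-source tooling. A sketch using the factor_analyzer Python package (not the authors’ own software), where item_df is a hypothetical respondents-by-items DataFrame of reverse-scored responses and the four factors match the solution ultimately retained in the Results:

```python
from factor_analyzer import FactorAnalyzer

# item_df: respondents x items DataFrame of reverse-scored PBNAPS responses.
# Maximum likelihood extraction with direct oblimin (oblique) rotation.
fa = FactorAnalyzer(n_factors=4, method="ml", rotation="oblimin")
fa.fit(item_df)

pattern_matrix = fa.loadings_   # pattern loadings used to interpret the factors
factor_correlations = fa.phi_   # factor intercorrelations (oblique rotations only)
```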
In this case, not only is the sample size sufficient (n = 237 completed at 100%; n = 244 with PBNAPS scores), but the participant-to-item ratio also exceeds the minimum of 10 participants per item (López-Aguado & Gutiérrez-Provecho, 2019; Pallant, 2016). Within the individual item correlation matrix, 20 of the 25 PBNAPS items maintained an absolute value correlation of r = .30 or greater with at least one other item (López-Aguado & Gutiérrez-Provecho, 2019; Pallant, 2016; Tabachnick & Fidell, 2007). The Kaiser–Meyer–Olkin measure of sampling adequacy was meritorious at .89. Bartlett’s Test of Sphericity was significant, χ2(300) = 2,591.81, p < .001, suggesting the sample correlation matrix is significantly different from an identity matrix and, therefore, appropriate for factor analysis (López-Aguado & Gutiérrez-Provecho, 2019; Pallant, 2016; Pinckney-Lewis, 2021; Tabachnick & Fidell, 2007). In summary, these preliminary analyses show the data were adequate for further EFA (López-Aguado & Gutiérrez-Provecho, 2019; Pallant, 2016; Pinckney-Lewis, 2021).
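These preliminary screens (interitem correlations, KMO, and Bartlett’s test) can likewise be sketched with factor_analyzer, again assuming the same hypothetical item_df:

```python
import numpy as np
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

chi_square, p_value = calculate_bartlett_sphericity(item_df)  # H0: identity matrix
kmo_per_item, kmo_total = calculate_kmo(item_df)              # sampling adequacy

# Count items correlating at |r| >= .30 with at least one *other* item.
corr = item_df.corr()                                    # pairwise Pearson r
off_diagonal = corr.where(~np.eye(len(corr), dtype=bool))
passes_screen = off_diagonal.abs().ge(0.30).any(axis=1)

print(f"KMO = {kmo_total:.2f}; Bartlett chi2 = {chi_square:.2f}, p = {p_value:.3g}")
print(f"{passes_screen.sum()} of {len(passes_screen)} items meet the |r| >= .30 screen")
```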
RESULTS
Participants
After 84 respondents were eliminated from analysis due to not completing a substantial portion of the PBNAPS, 265 respondents were included. They represented a diverse set of organizational contexts, affiliation types, and years of affiliation. The exact figures are provided in Table 3.
Internal Consistency
With all PBNAPS items, including the optional items for respondents reporting a second facilitator (n = 28 due to listwise deletion), the scale showed good internal consistency and reliability (α = .86) (DeVellis, 2017; Pallant, 2016; Thorndike & Thorndike-Christ, 2010). Based on this calculation, the proportion of total variation on the PBNAPS that can be attributed to the construct of perceived burden, and not to error, is .86 (DeVellis, 2017). When excluding the repeated items for a second facilitator (n = 235), the scale’s internal consistency increased (α = .87). The current PBNAPS also showed varying but improved degrees of internal consistency within its individual subscales (PDOR, α = .53; POC, α = .68; PPS, α = .84; PSSP, α = .83). Whereas the PDOR and POC subscales were not as internally consistent as the PPS and PSSP subscales, we argue their acceptability for the following reasons: (a) alpha coefficients below .70 are common for subscales with fewer than 10 items (Pallant, 2016); (b) both subscales had reasonable mean interitem correlations; (c) each subscale’s internal consistency can be improved to α = .59 and α = .76, respectively, by eliminating an item; and (d) no high-stakes decisions will be made regarding the individuals responding to the PBNAPS or the corresponding facilitators (Pinckney-Lewis, 2021).
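The alpha-if-item-deleted figures cited in reason (c) come from recomputing alpha with each candidate item removed in turn; a short sketch building on the cronbach_alpha helper shown earlier:

```python
import pandas as pd

def alpha_if_item_deleted(items: pd.DataFrame) -> pd.Series:
    """For each item, Cronbach's alpha of the scale with that item removed.
    Values above the full-scale alpha flag items worth reconsidering."""
    return pd.Series(
        {col: cronbach_alpha(items.drop(columns=col)) for col in items.columns}
    ).sort_values(ascending=False)

# Example usage: alpha_if_item_deleted(scored[poc_columns]) for the POC subscale.
```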
Correlations Between the Subscales and the Overall PBNAPS
Each of the subscales was positively correlated with the overall PBNAPS measure. Whereas the PDOR subscale had a large, positive correlation with the overall PBNAPS scores, r(242) = .53, p < .01, it represents the smallest relationship of the subscales to the total measure. The POC subscale had the largest, positive correlation with the overall PBNAPS scores, r(241) = .73, p < .01. The PPS subscale had the next largest positive correlation with the overall PBNAPS measure, r(242) = .67, p < .01, followed by the PSSP subscale, r(242) = .65, p < .01. For these analyses, all PPS and PSSP items were included across both iterations while leveraging pair-wise deletion (Pinckney-Lewis, 2021).
Correlations Among the PBNAPS Subscales
There were also positive, significant correlations among most of the subscales themselves. The PDOR subscale had a medium-sized positive correlation with the POC subscale, r(242) = .44, p < .01. The PDOR subscale had a small positive correlation with the PPS subscale, r(242) = .15, p < .05. The PDOR subscale was not significantly correlated with the PSSP subscale, r(242) = .10, p = .13. The POC subscale had a medium positive correlation with the PPS subscale, r(241) = .39, p < .01, and with the PSSP subscale, r(241) = .32, p < .01. Finally, the PPS subscale had a large positive correlation with the PSSP subscale, r(242) = .80, p < .01 (Pinckney-Lewis, 2021). Table 4 summarizes these data. These results are favorable and support the construct validity of the PBNAPS.
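Subscale-to-total coefficients of this kind can be computed with Pearson’s r under pairwise deletion; a sketch with hypothetical column names for the subscale and total scores:

```python
from scipy import stats

SUBSCALES = ["pdor", "poc", "pps", "pssp"]  # hypothetical score column names

def subscale_total_correlations(scores, total_col="pbnaps_total"):
    """Pearson r between each subscale score and the overall PBNAPS score,
    using pairwise deletion of missing values."""
    results = {}
    for sub in SUBSCALES:
        pair = scores[[sub, total_col]].dropna()  # pairwise-complete cases only
        r, p = stats.pearsonr(pair[sub], pair[total_col])
        results[sub] = {"r": r, "p": p, "df": len(pair) - 2}
    return results
```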
Underlying Component Structure of the PBNAPS
Within the EFA, initial eigenvalues showed six components with values above 1.0. However, the scree plot showed most of the variation was explained by a smaller number of components. Therefore, the authors leveraged the pattern matrix to make final determinations. Table 5 provides the results of the EFA, which yielded a four-factor solution, each factor with three or more items loading, that together explained 52.27% of the total variance in perceived burden (Pinckney-Lewis, 2021). The Harman single factor test yielded 31.1% of variance explained by a single-factor model, suggesting common method bias was not a substantial concern. Seven items were removed due to low factor loadings. After examining how the 18 retained items loaded onto components and the constructs they represent, the authors assigned the following factor labels based on the construct of perceived burden: perceptions of needs assessment facilitators in relation to individual participants, perceptions of needs assessment facilitators in relation to the organizational context, perceptions of other commitments in relation to the needs assessment experience, and perceptions of task responsibility/energy. Figure 4 provides a visual representation of this final model.
Figure 4: Visualization of the Final Factor Model (from Pinckney-Lewis, 2021)
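The retention workflow described above (initial eigenvalues, scree inspection, then pattern matrix loadings) might look like the following sketch, reusing the fa object and hypothetical item_df from the Methods sketch; the .40 loading cutoff is an assumption for illustration, not a figure reported by the authors:

```python
import matplotlib.pyplot as plt
import pandas as pd

eigenvalues, _ = fa.get_eigenvalues()   # eigenvalues of the item correlation matrix

# Scree plot: look for the "elbow" where additional factors add little variance.
plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
plt.axhline(1.0, linestyle="--")        # Kaiser criterion reference line
plt.xlabel("Factor")
plt.ylabel("Eigenvalue")
plt.show()

# Retain items with a salient loading on at least one factor (assumed .40 cutoff).
loadings = pd.DataFrame(fa.loadings_, index=item_df.columns)
retained = loadings[loadings.abs().max(axis=1) >= 0.40]
```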
DISCUSSION
The PBNAPS was created because no scale yet existed to measure the perceived burden experienced by participants and stakeholders in NAs. It proved to be a valid, reliable tool with several implications for NA facilitators and the fields of ID and HPT. Not only did the PBNAPS prove internally consistent and valid, but it also proved efficient: 92% of respondents completed the PBNAPS in 20 minutes or less, and 80% completed it in 10 minutes or less. With the elimination of items within the EFA, the PBNAPS will take even less time to complete. In studying the burdens that NA participants face, the PBNAPS should not itself cause undue burden. NA practitioners should feel confident in implementing the PBNAPS as described, without fear of negatively impacting their stakeholders (Pinckney-Lewis, 2021).
Temporal Considerations
When and how should NA facilitators implement the PBNAPS? From a temporal perspective, the PBNAPS should be deployed either at the conclusion of the NA or some time thereafter, but no more than 1 month post–NA conclusion. It is important for the PBNAPS to be deployed while the respondents’ memory of their NA experience is still fresh. Waiting any longer may cause respondents to confound the NA process with the deployment of resulting interventions. The goal is for respondents to distinguish between the two and isolate their NA experience when responding to the PBNAPS (Pinckney-Lewis, 2021).
However, the intention is not to deploy the PBNAPS in conjunction with every NA that is conducted. That would be counterproductive and could itself increase the perceived burden of NA stakeholders. Instead, the PBNAPS should be used as a spot check for practitioners to gain insight into and reflect on their own practice. Practitioners should leverage the PBNAPS either within a set periodicity (e.g., once a year with a sample of their NA participants), after piloting a new methodology within their NA practice, or after engaging in a new organizational setting. This information will be useful in helping practitioners determine whether their approaches were equitable given the burden their participants report. NA facilitators must then adjust their practice to ensure perceived burden levels remain optimally low (Pinckney-Lewis, 2021).
Social Consequences of the PBNAPS
With every instrument, there are intended as well as unintended social consequences that come into play with its use (Cronbach, 1988; Lissitz & Samuelsen, 2007; Messick, 1989; Thorndike & Thorndike-Christ, 2010). The PBNAPS is meant to provide facilitators with feedback on their own practice. However, it is not meant to be punitive or to have adverse consequences for NA facilitators or their respondents. It is also not meant as a decisional, high-stakes tool; it should be used only for personal practitioner reflection and process improvement. Much care must be given to how the PBNAPS is operationalized to avoid these unintended consequences (Pinckney-Lewis, 2021).
FUTURE RESEARCH
The PBNAPS is still in its infancy. This research helps to establish a presence within the NA literature on this topic. However, replication of the research and further trials of the PBNAPS are needed. Future research will help to establish a more prominent presence within the literature and continue to build out the construct of perceived burden.
Globalization and Accessibility
The ultimate goal is for the PBNAPS to be applicable across settings and ability types. However, one of the major limitations of the current research is that it did not explicitly address or account for either of these important considerations. Although none of the PBNAPS items is intended to apply solely to a highly educated, English-proficient audience, more consideration should be given to the inherent diversity within the potential PBNAPS respondent population. Just as NAs themselves should take a globalized perspective and display cultural sensitivity (Watkins & Altschuld, 2014), so too should the PBNAPS. Future iterations of such research could benefit from beta testing with a more intentionally linguistically and culturally diverse sample of the target population (Pinckney-Lewis, 2021).
Limitations to the Factor Analysis
Some general limitations of factor analysis studies include the possibility that items or entire measures were not created to reflect the constructs as theorized (Costello & Osborne, 2005; Pallant, 2016). Based on the final component model, a more nuanced look at the construct of perceived burden is warranted. It is also possible within this data set that there were too few items to represent the underlying construct dimensions. Each subscale contained only six to seven items, providing a relatively small pool from which to examine the dimensionality of a construct as broad as perceived participant burden (Costello & Osborne, 2005; Pallant, 2016; Thorndike & Thorndike-Christ, 2010). Future iterations of the PBNAPS should include additional items within those components onto which only a few items loaded to ensure a more balanced probe into each of the subcomponents (Pinckney-Lewis, 2021).
CONCLUSION
The PBNAPS is the first of its kind to examine the NA participant experience in an explicit and deliberate way. It can and should be operationalized as a valuable, reliable instrument to measure the amount of perceived burden experienced by NA participants. However, it does require some revisions and replication based on the final component model. As it continues to be refined and validated over time, the tool can also provide valuable feedback to practitioners and ID/HPT interventions. Obtaining a better sense of how perceptions of burden affect NA processes and outcomes can help practitioners further determine how to go about their work.
Contributor Notes
KIM PINCKNEY has more than 20 years of experience in the training and education fields. Dr. Pinckney is a former Spanish educator who has also served as an instructional designer and performance improvement consultant within academia, industry, and several government and intelligence community spaces. In 2020, she founded KP Solutions & Consulting, LLC, where she provides instructional design, performance improvement, and special education advocacy services. Currently, she is an associate director within the New Jersey Education Association Professional Development and Instructional Issues Division. Her research interests include exploring the intersections between adult learning theories, instructional practices for neurodivergent and disabled populations, digital-age technology demands, needs assessment and evaluation best practices, and maximizing knowledge transfer. She earned her bachelor’s degree in Spanish language and literature from Swarthmore College; master’s in second language acquisition and application from the University of Maryland, College Park; and doctorate in instructional design and technology from Old Dominion University. Email: kimpinckney.solutions@gmail.com
R. JASON LYNCH serves as an assistant professor of higher education in the Reich College of Education at Appalachian State University as well as the founding executive editor for the Journal of Trauma Studies in Education. His research uses quantitative, qualitative, and mixed methods approaches to better understand the impacts and implications of traumatic stress within education organizations, with a specific focus on secondary trauma, trauma-informed leadership, and organizational trauma. He earned his bachelor’s degrees in biology and psychology from the University of North Carolina, Wilmington, master’s in higher education administration from North Carolina State University, and doctorate in higher education from Old Dominion University. Email: lynchrj@appstate.edu
This work complies with ethical standards. The authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or nonfinancial interest in the subject matter or materials discussed in this manuscript. All procedures performed in this study involving human participants were in accordance with the ethical standards of Old Dominion University (1536802-2). Informed consent was obtained from all participants included in the study. All participants included within the study also provided informed consent regarding publishing their data.