Online Publication Date: 18 Jul 2025

EDITOR’S NOTES—REIMAGINING THE PERFORMANCE ANALYST: ETHICAL INTELLIGENCE, AI FLUENCY, AND THE FUTURE OF IMPROVEMENT: A PERSPECTIVE FROM A PERFORMANCE ANALYST

Article Category: Editorial
Page Range: 105–107
DOI: 10.56811/Turner-Editorial-37-3

Performance has evolved into a construct that is more than a measurement; it must also be interpreted. Interpretation demands information (data) and cognitive processes (conscious and unconscious). Previously, the role of a performance analyst involved gathering metrics, running time studies, and building dashboards rooted in standardized key performance indicators (KPIs). Today, that same analyst must navigate a complex web of Artificial Intelligence (AI)-generated insights, workforce algorithms, ethical concerns, and digital platforms that both quantify and influence human behavior. In short, the definition of performance, and the people responsible for evaluating human performance, have fundamentally changed.

Organizations are increasingly relying on AI and machine learning (ML) techniques to drive human capital decisions. Today’s performance analyst is a central organizational agent responsible for ensuring that these systems operate ethically, transparently, and inclusively. Since the introduction of AI-enabled platforms like Workday, Oracle Cloud HCM, SAP SuccessFactors, and ADP’s DataCloud, the performance landscape has transformed (Oracle, 2024; Workday, 2024). These systems offer predictive insights on everything from attrition risk to performance reviews, yet the interpretation of their outputs still hinges on human understanding, judgment, and contextual knowledge. The analyst’s job is no longer simply one of measuring; it has evolved into sensemaking and interpretation.

This evolution requires dual literacy: the ability to speak the language of data and the capacity to question its assumptions. As Turner (2024) wrote in his editorial on the survival of performance improvement (PI) and human performance technology (HPT), relevance depends on how a field updates its knowledge base in response to external developments. For the performance analyst, this means keeping pace with the accelerating integration of AI while maintaining fidelity to core human values such as fairness, transparency, and equity. In practical terms, AI systems embedded in HR platforms are transforming how organizations evaluate individual and team performance. These platforms allow users to apply natural language processing (NLP), machine learning, and sentiment analysis to assess communication effectiveness, project success rates, and even collaboration dynamics.

New concerns have arisen as these systems are rolled out and become embedded in today’s organizations. SHRM (2024), for example, warns that such systems can inadvertently reinforce existing biases if they are not regularly audited and contextualized. An AI tool might correlate frequent email communication with strong team performance, overlooking cultural or neurodivergent differences in communication style. Another tool might assess performance without accounting for the systemic barriers faced by a marginalized workforce (Patole et al., 2025). Without ethical scrutiny, such insights can become instruments of exclusion rather than improvement.

The ongoing Mobley v. Workday lawsuit serves as a pivotal case study for performance analysts operating within AI-integrated HR environments. In May 2025, the U.S. District Court for the Northern District of California granted preliminary certification for a collective action under the Age Discrimination in Employment Act (ADEA), allowing the case to proceed on behalf of applicants aged 40 and over who were allegedly disadvantaged by Workday’s AI-driven hiring tools (Fisher Phillips, 2025). The plaintiffs contended that Workday’s algorithmic systems disproportionately screened out older applicants, raising significant concerns about potential biases embedded within AI recruitment technologies.
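An analyst auditing an AI screening tool for the kind of disparity alleged in this case can begin with a simple selection-rate comparison. The sketch below is illustrative only: the group labels and counts are invented, not drawn from the litigation. It applies the EEOC’s “four-fifths” guideline, under which a selection-rate ratio below 0.80 is treated as evidence of possible adverse impact worth investigating.

```python
def selection_rate(screened_in, total):
    """Share of applicants in a group who pass the automated screen."""
    return screened_in / total if total else 0.0

# Hypothetical screening counts from an AI hiring tool
# (invented numbers, for illustration only).
outcomes = {
    "under_40":    {"screened_in": 48, "total": 100},
    "40_and_over": {"screened_in": 30, "total": 100},
}

rates = {group: selection_rate(d["screened_in"], d["total"])
         for group, d in outcomes.items()}

# Adverse impact ratio: protected-group rate / comparison-group rate.
air = rates["40_and_over"] / rates["under_40"]

# EEOC four-fifths guideline: flag ratios below 0.80 for review.
if air < 0.80:
    print(f"Possible adverse impact: selection-rate ratio = {air:.2f}")
```

A ratio this far below 0.80 would not by itself prove discrimination, but it is the kind of routine, documented check that the ethical-oversight role described here demands.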

The Mobley v. Workday lawsuit exemplifies a broader imperative across the human resources spectrum, including performance analysts, to engage in critical evaluation and ethical oversight of AI technologies. Practitioners must ensure that these systems are regularly audited for fairness and compliance with anti-discrimination laws. Moreover, the court’s consideration of Workday’s potential liability as an “agent” of its client employers highlights the broader accountability of AI vendors in the employment decision-making process (Nelson Mullins, 2024; Seyfarth Shaw LLP, 2024).

Performance analysts, therefore, play a crucial role in bridging the gap between technological capabilities and ethical employment practices, advocating for transparency and inclusivity in AI-driven HR solutions. To be effective in the new terrain of emerging technologies in performance management, analysts must develop fluency in both the technical underpinnings of AI systems and the ethical frameworks that guide their use. LinkedIn Learning (2024) and McKinsey’s State of AI report (McKinsey & Company, 2023) highlighted that organizations now expect performance analysts not only to interpret AI outputs but also to guide their responsible use. This calls for competencies such as data ethics, algorithmic bias identification, critical inquiry, and systems thinking: skills traditionally underemphasized in performance improvement training.

Furthermore, analysts must now work across disciplines. The days of working in isolation with spreadsheets are coming to a close. Today’s performance analyst collaborates with IT architects, equity and inclusion officers, HR business partners, and data scientists to ensure that performance metrics are meaningful, equitable, and contextually grounded. In this role, analysts function as translators between systems and people, identifying where metrics reflect reality and where they mask deeper issues of access, privilege, or inequity.

This multidisciplinary pivot is particularly evident in organizations undergoing digital transformation. In such environments, analysts are often tasked with helping leaders understand how AI outputs align or conflict with organizational values and human experience. A dashboard might indicate declining productivity in a hybrid team, but the analyst must ask: Is this due to actual performance shifts, or a result of proximity bias in how data is gathered and interpreted (Choudhury et al., 2021)? In this way, the analyst's role is not passive but critical, not just technical but ethical.

The concept of “power skills” (McWhorter & Bennett, 2024) becomes especially salient here. These are not soft skills in the traditional sense but rather core competencies that allow analysts to engage with ambiguity, lead difficult conversations, and advocate for inclusive practices. Empathy, storytelling, and systems thinking are now central to performance analysis, not just adjunct to it.

Unfortunately, many HR analytics training programs and performance improvement curricula have not kept pace with these shifts. They continue to emphasize traditional ROI calculations, productivity metrics, and individual-level interventions without interrogating the broader socio-technical systems in which performance takes place. This gap between evolving practice and stagnant pedagogy represents a serious threat to the field’s relevance. As editors and reviewers of Performance Improvement Quarterly (PIQ), we have an opportunity and responsibility to help bridge this gap. PIQ must actively curate research and commentary that reflects the new complexities of performance analysis in AI-enhanced workplaces. This includes:

  • Encouraging interdisciplinary research that blends ethics, technology, and HRD.

  • Publishing conceptual frameworks that redefine performance in hybrid, remote, and AI-mediated environments.

  • Prioritizing submissions that center equity, inclusion, and social identity as core components of performance.

  • Expanding the reviewer pool to include experts in (1) digital ethics, (2) organizational psychology, and (3) human–AI teaming.

As performance improvement specialists, we must also ask ourselves whether the metrics we use to assess research are aligned with the values we espouse. Are we prioritizing clarity over complexity? Are we rewarding innovation or merely replicability? The future of performance improvement depends on our willingness to embrace intellectual risk and methodological diversity. Along these lines, I offer a call to scholars and practitioners alike: Expand your definition of what counts as performance. Go beyond the dashboards. Interrogate the algorithms. Center the human experience. And perhaps most importantly, remember that data is never neutral. It always reflects the priorities, blind spots, and assumptions of the systems that generate it.

Performance is no longer just about achieving outcomes; it is about understanding the processes, values, and relationships that shape those outcomes. In this way, the role of the performance analyst is not diminishing in the face of AI. It is becoming more essential than ever. As an Associate Editor of PIQ and a practitioner in workforce analytics, I believe the time has come to reimagine performance analysis as a deeply human, deeply ethical, and deeply interdisciplinary practice. I look forward to working with our scholarly community to advance this vision.

Reviewers Needed

Our goal, as editors of PIQ, is to grow and advance the journal’s reach to various disciplines, industries, and markets. We also look to provide an outlet for the field’s knowledge base. However, to accomplish this goal, the journal needs continued support from existing reviewers and the addition of new reviewers to the peer review team. If you are interested in reviewing PIQ submissions, please create an account and sign up as a reviewer at https://www.editorialmanager.com/piq/default.aspx.

Authors from the performance improvement communities are encouraged to submit new research that meets the minimum requirements highlighted in previous editorials in this journal (Turner, 2018a, 2018b, 2018c, 2019a, 2019b). If you are interested in having your manuscript considered for publication in PIQ, present your research study after reviewing the minimal requirements highlighted in those editorials as well as the author guidelines at https://ispi.org/page/PIQuarterly.

Note to Current Reviewers

Peer review is necessary for a journal’s success and reputation. We thank our current reviewers for their time and dedication to PIQ. We appreciate the support from our reviewers and need to continue growing the number of active reviewers for the journal. After switching to a new article management system, we need reviewers to create an account in the new system (see link provided above) and to enter their profile information. We are unable to search for and contact reviewers if their details are not in the new system. We ask all reviewers who are interested in continuing to support PIQ to sign in to the new system and update their profile information.

We appreciate your service to the field and the journal (PIQ), and thank you for signing into the new system.

Copyright: © 2024 International Society for Performance Improvement.