THE ROLE OF ETHICAL AND TRUSTWORTHY AI TEAMMATES IN ENHANCING TEAM PERFORMANCE: A SYSTEMATIC LITERATURE REVIEW
This systematic literature review (SLR) examined the influence of ethical and reliable AI teammates on improving team performance in Human-AI teams (HAITs). The review synthesized 37 peer-reviewed papers to investigate how transparency, explainable AI (XAI), and ethics cultivate trust, an essential component for effective human-AI collaboration. Ethical AI teammates enhance team dynamics by mitigating uncertainty, guaranteeing equity, and fostering transparency in decision-making. Nonetheless, significant challenges exist in trusting AI teammates, notably the “black box” nature of AI teammates, which reflects a lack of transparency, and trust violations. Trust restoration methods, such as explanations and extending trust to AI teammates with caution, are crucial for reinstating trust following breaches. The study concluded by highlighting the implications for enhancing team performance through ethical and trustworthy AI teammates, adding to the existing literature on human-AI collaboration.
Trust is the cornerstone of human-AI collaboration, directly influencing team performance and cohesion.
Ethical AI systems, designed with transparency and fairness, are pivotal for building trust and improving team dynamics.
The Future of Work (FoW) necessitates extensive collaboration and interdependence between humans and artificial intelligence (AI) teammates in achieving shared goals (Zhang et al., 2023). With the continual evolution of contemporary work environments (Georganta & Ulfert, 2024), researchers in team dynamics and human–AI interaction anticipate that AI agents will soon be fully integrated as teammates (Seeber et al., 2020). A human-AI team (HAIT) is characterized by a composition of at least one human and one artificial agent (AI teammate), wherein the members are interdependent, have a shared goal, and exhibit a considerable level of autonomy (Demir et al., 2020; O’Neill et al., 2022).
The essential factor for the success of HAITs is the trust established between human and AI collaborators (Georganta & Ulfert, 2024). As trust continues to be a primary concern, the potential of an AI teammate to improve team performance faces challenges (Dennis et al., 2023). Trust is acknowledged as a crucial element in effective collaboration, impacting team performance when engaging with both human and AI counterparts (Ulfert et al., 2023).
Achieving high performance requires overcoming substantial trust barriers to effective collaboration in HAITs. Despite progress, significant hurdles continue to hamper full synergy between humans and AI teammates. Hence, a deeper understanding of trust and other collaboration mechanisms within human-AI partnerships is needed (Ulfert et al., 2023).
Ulfert et al. (2023) underscored the significance of trust in human-AI interactions, as highlighted in both psychology and computer science research. Yet trust between human and AI teammates poses distinct challenges. As AI researchers advance the autonomous capacities of AI teammates, human perceptions of AI trustworthiness are crucial for their effective inclusion in teams (Zhang et al., 2023).
Malik et al. (2022) highlighted that despite a voluminous literature on trust, it is not clear how to build trust in AI teammates as a resource for business efficiency and innovation. Trust in AI teammates is developed through mechanisms such as transparency, reliability, and compliance with ethical standards (O’Neill et al., 2022). Trust has been extensively examined in multiple fields, such as interpersonal interactions and human-automation interaction (Mayer et al., 1995).
Trustworthiness is a pivotal aspect in assessing successful human-AI collaboration, a concept that holds equal significance in evaluating the willingness to collaborate with both human and AI team members (Dennis et al., 2023). However, the specific factors affecting the willingness to work with AI teammates are inadequately understood (Dennis et al., 2023). The significance of ethics and explainable AI (XAI) is growing, as these elements foster trust by rendering the decision-making processes of AI teammates more transparent and comprehensible to their human counterparts (Zerilli et al., 2022).
Studies indicate that human team members often impose social expectations on AI teammates, expecting behavior, communication, and comprehension akin to those of humans (Schelble et al., 2024; Zhang et al., 2021). This expectation imposes more requirements on AI teammates, necessitating that they not only execute tasks but also conform to societal standards of collaboration and reliability. Considering the rapid advancement of AI technologies and their increasing integration into teams, it is essential to examine the dynamics between AI teammates and their human counterparts to investigate the interplay of trust, ethics, and performance in HAITs.
Despite the growing popularity of AI, no comprehensive assessment of the literature has been undertaken to combine findings on AI teammates, performance, and trust within the workplace environment. The goal of this research was to examine the existing research on the influence of ethical and trustworthy AI teammates on improving team performance. Therefore, the current review aimed to systematically address two core research questions (RQs):
RQ 1: How do ethical and trustworthy AI teammates contribute to improving team performance?
RQ 2: What barriers hinder trust in AI teammates, and how can trust be effectively restored upon violation?
The first RQ aims to investigate how AI teammates developed with ethical principles and openness might enhance trust and elevate team performance. The second RQ investigates the potential hazards of losing trust in an AI teammate, the importance of cautiously trusting an AI teammate, and measures to restore trust after ethical violations or system failures caused by AI teammates. The significance of these inquiries extends beyond the immediate setting of HAITs.
As AI increasingly integrates into collaborative work settings, businesses must guarantee that AI counterparts are not just technically proficient but also reliable allies that enhance the team's overall performance. This study contributes to the expanding literature on human-AI collaboration and provides practical insights on how organizations can utilize AI teammates’ capabilities while preserving the essential human element of trust, thus improving overall team performance and cohesion.
This study commenced with an introduction to the notion and intricacies of trust, followed by a comprehensive examination of the research methods employed for the review. Subsequently, it analyzed fundamental concerns regarding the role of AI teammates in improving performance, especially through the lens of ethical and reliable AI. It then examined the difficulties of cultivating trust in AI teammates and their influence on team performance. The paper concluded with a discussion of the implications for practice and research, an examination of the study’s limitations, and recommendations for future reviews.
METHOD
This study conducted a systematic literature review (SLR) to synthesize existing research on teams with AI teammates and their performance. An SLR is a method for locating, evaluating, and interpreting all available research on a specific topic, research question, or phenomenon (Kitchenham, 2004). Such reviews offer a summary of the existing knowledge in an area, enabling the identification of future research goals (Page et al., 2021, para. 1).
Systematic literature review (SLR) research combines the advantages of an exhaustive search methodology and critical evaluation, as they encompass diverse study designs instead of concentrating on a singular design (Grant & Booth, 2009). Therefore, SLR possess similarities with integrative literature reviews (Torraco, 2016) and scoping reviews (Wang, 2019). This research employed the parameters established by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol (Moher et al., 2009), as illustrated in Figure 1. Moher et al. (2009) define a systematic review as “a review of a clearly formulated question that employs systematic and explicit methods to identify, select, and critically appraise relevant research, as well as to collect and analyze data from the studies included in the review” (p. 1).



Figure 1. Flow of Information Based on the PRISMA Statement for Reporting Systematic Reviews
The Search and Review Protocol
Given the necessity of gathering interdisciplinary insights due to the fast-paced developments in the applications of AI systems (Rodgers et al., 2023), this study utilized two principal databases: (a) Scopus, recognized for its comprehensive coverage of peer-reviewed journals across diverse academic fields (Fahimnia et al., 2015) and among the largest databases of peer-reviewed scholarly literature, and (b) Web of Science (WoS), maintained by Clarivate (Collins et al., 2021). These databases offer an intuitive interface for searching articles by several parameters, including publication year, document type, keywords, language, source type, source title, and subject area.
The search was conducted within the title, abstract, and keywords to identify potentially relevant publications. The principal objective of this study was to examine the influence of AI teammates on enhancing team performance, while simultaneously addressing the significance of ethical and trustworthy AI. To guarantee that the search was “broad enough to capture the breadth of relevant literature,” only key terms were utilized as search criteria (Torraco, 2016, p. 418). The intent was to allow the natural inclusion of literature covering ethics and trust in HAITs; using broad search terms reduced the likelihood of unintentionally overlooking significant studies (Torraco, 2016).
Consequently, search terms aligned with the study’s objectives were utilized to encompass the widest possible range of studies related to AI teammates, performance, and teams. Such a broad search avoided overlooking significant studies that could inform different facets of AI’s role in enhancing team performance. A focus on specific aspects such as trust and ethics was subsequently applied to refine the literature.
The study employed search phrases such as “AI” OR “Artificial Intelligen*”, with the wildcard operator ‘*’ facilitating the retrieval of relevant variations such as artificial intelligence or artificial intelligent. The additional search terms ‘team’ and ‘performance’ were used to capture literature commenting on performance in HAITs. A search query combining all search terms was formulated using the ‘AND’ Boolean operator.
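For illustration, the assembled query took a form similar to the following; the exact field syntax shown here is a reconstruction from the terms reported above, and the precise string varies by database interface:

    Scopus:          TITLE-ABS-KEY(("AI" OR "artificial intelligen*") AND "team" AND "performance")
    Web of Science:  TS=(("AI" OR "artificial intelligen*") AND "team" AND "performance")

In both databases, these field codes restrict the search to titles, abstracts, and keywords, consistent with the search scope described earlier.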
Inclusion and Exclusion Criteria
The present review employed a set of inclusion and exclusion criteria consistent with SLR techniques (Kitchenham, 2004; Moher et al., 2009) to guarantee that all records identified in the search contributed to the study's purpose. The inclusion criteria were: (a) articles in English, (b) articles published through a peer-reviewed process, (c) publications from 2013 or later, reflecting the latest advancements in AI teammates and performance over the past decade, and (d) studies conducted in a workplace context.
The exclusion criteria were as follows: (a) articles not published in English; (b) non-peer-reviewed literature such as books, book chapters, theses and dissertations, conference proceedings, and essays, as the review limited the evidence to peer-reviewed academic journals to guarantee data quality and methodological rigor (David & Han, 2004; Light & Pillemer, 1984); (c) articles published before January 2013, omitted to concentrate on the most recent research in the rapidly evolving field of AI from 2013 to the date of the search; and (d) articles whose primary research focus did not pertain to AI teammates, performance, and HAITs.
The preliminary search produced 1304 items from Scopus and 569 from Web of Science. After the removal of duplicates, 683 articles remained. A three-step procedure was then employed to identify papers that satisfied the study’s inclusion and exclusion criteria. First, the titles, keywords, and abstracts of the 683 articles were screened to exclude research that did not conform to the established criteria. Second, the eligibility of the remaining articles was assessed; for instance, articles that did not address AI teammates, HAITs, performance, teams, or the workplace were excluded. These two phases yielded a refined collection of 91 articles. Third, the full inclusion and exclusion criteria were applied, removing a further 59 items and narrowing the set to 32 relevant articles that predominantly addressed AI teammates, HAITs, performance, ethics, and trust.
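In summary, the selection flow was: identification, 1304 (Scopus) + 569 (WoS) = 1873 records; deduplication, 683 unique articles; screening and eligibility assessment, 91 articles; application of the full criteria, 91 - 59 = 32 articles; and the manual backward and forward citation search described below, 32 + 5 = 37 articles.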
To guarantee that no pertinent items were overlooked, a manual backward and forward search was conducted (Thomas et al., 2022) as an additional step to the database search. For the backward search, the references of each selected article were examined; for the forward search, all publications citing the selected articles were reviewed. This step prevented the exclusion of pertinent research publications that did not satisfy the keyword requirements. The manual procedure contributed five supplementary articles, resulting in a total of 37 articles for data analysis. The final list of articles spanned multiple disciplines and outlets, including the International Journal of Human-Computer Studies, Journal of Management Information Systems, Human Factors, European Journal of Work and Organizational Psychology, Computers in Human Behavior, Behaviour and Information Technology, and Frontiers in Psychology. Refer to the references denoted with an asterisk for a compilation of the 37 articles included in this study.
Data Extraction and Analysis
As part of the data analysis, a concept matrix was employed (Webster & Watson, 2002) to extract and systematically arrange data from the articles. In addressing the first research question on how AI teammates optimize performance in HAITs, three categories were identified: the role of ethics, explainable AI (XAI), and collaboration practices with AI teammates. Similarly, to answer the second research question regarding the barriers to trusting AI teammates, three categories were created: understanding the challenges in trusting AI teammates, trust repair mechanisms upon breach of trust by AI teammates, and building cautious trust in AI teammates.
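As a minimal sketch of this extraction step, a concept matrix maps each article onto the analytic categories; the article names and category assignments below are hypothetical placeholders, not the actual matrix used in this review:

    # Sketch of a Webster & Watson (2002) style concept matrix (hypothetical entries).
    CATEGORIES = [
        "ethics", "xai", "collaboration",                       # RQ1 categories
        "trust_challenges", "trust_repair", "cautious_trust",   # RQ2 categories
    ]

    concept_matrix = {
        "Article A": {"ethics", "xai"},
        "Article B": {"trust_challenges", "trust_repair"},
        "Article C": {"collaboration", "cautious_trust"},
    }

    # Tabulate which articles inform each category.
    for category in CATEGORIES:
        supporting = [name for name, cats in concept_matrix.items() if category in cats]
        print(f"{category}: {', '.join(supporting) or 'none'}")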
The next section examines the role of AI teammates in enhancing team performance, focusing on critical elements that impact collaboration and productivity. It begins by exploring foundational insights on human-AI partnerships, then delves into key factors that drive effective performance. A significant part of this review is dedicated to the role of trust in human-AI dynamics, addressing how trust influences acceptance, reliability, and mutual support between human and AI teammates. Together, these insights offer a comprehensive understanding of how AI can be integrated into team environments to support and elevate overall performance.
THE ENTANGLEMENT OF TRUST IN HUMAN AND AI TEAMMATES
Trust in the team literature is defined as the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party (Schelble et al., 2024; Textor et al., 2022; Ulfert et al., 2023; Zhang et al., 2023). Additionally, trust is acknowledged as a multifaceted and complex construct (Dennis et al., 2023; Georganta & Ulfert, 2024; Schelble et al., 2024). According to Textor et al. (2022), trust in an AI teammate is determined by three principal factors: performance (the tasks the AI collaborator accomplishes), procedure (the methodology employed by the AI to execute its tasks), and purpose (the rationale for the AI collaborator’s creation).
Dennis et al. (2023) contended that trust is applicable to technologies similarly to its application to individuals, proposing that an AI teammate might be considered trustworthy based on its reliability and the degree to which it proves to be worthy of trust. Trust in an AI teammate is established through perceptions and interactions, yet is also shaped by cognitive biases, skepticism, and irrational influences (de Visser et al., 2018). Cognitive trust, emphasizing a trustee's competencies and performance, contrasts with affect-based trust, which prioritizes social, interpersonal, and emotional dimensions (Zhang et al., 2023).
Trust is sometimes considered unnecessary outside social ties, underscoring its intrinsically emotional character within teams (Zhang et al., 2021). Thus, trust encompasses both cognitive and affective elements, constituting a crucial aspect of team dynamics. A newly added AI teammate is deemed trustworthy based on its perceived ability, integrity, and benevolence (Georganta & Ulfert, 2024).
Nevertheless, insufficient prior experience with AI teammates frequently results in ambiguity and misjudgment of the AI teammate's ability, integrity, and benevolence (Georganta & Ulfert, 2024). In a collaborative environment, the aspect of ability can be categorized into task-related skills (e.g., competence) and team-related abilities (e.g., proactive behavior; Georganta & Ulfert, 2024). Ability includes the knowledge and skills needed for task execution as well as the interpersonal competencies essential for effective collaboration.
Integrity pertains to a teammate’s credibility, sense of fairness, ethical standards, and consistency (Georganta & Ulfert, 2024). Dennis et al. (2023) asserted that integrity entails unwavering adherence to acceptable norms and the transparent execution of activities, comprising two fundamental dimensions: consistency and transparency. AI teammates inherently exhibit consistency, as they typically yield identical outcomes when given the same inputs.
Transparency, as articulated by Dennis et al. (2023), is the commitment to values that govern one’s behavior. AI teammates are less prone to detrimental activities, such as violating organizational ideals, participating in political maneuvering, or pursuing concealed personal agendas. In contrast to human collaborators, AI does not seek individual objectives or engage in competition with fellow team members. Nonetheless, concerns over fairness and equity in AI systems have emerged, especially in cases of biased performance and gender or ethnic biases (Dennis et al., 2023).
Benevolence, a crucial element of trust, encompasses politeness, consideration for the team, and an affirmative disposition towards team members (Georganta & Ulfert, 2024). Dennis et al. (2023) defined benevolence as the degree to which an individual endeavors to benefit others, irrespective of external incentives. Benevolence, as an intrinsic human trait, is crucial for fostering trust within teams. An AI teammate can exhibit benevolence by being programmed to possess no hidden goals and to refrain from behaviors that unjustly advantage itself, attributes that are not consistently assured in human collaborators.
The benevolence and integrity of AI teammates can be affected by both design components and deployment methods, as these aspects may be influenced by users' perceptions of the organization (Dennis et al., 2023). Mayer et al. (1995) contended that a new teammate’s competence and integrity may be evaluated through rational analysis of objective criteria such as track records and performance history, whereas assessing benevolence necessitates contact and emotional engagement. In contrast to competence and integrity, the benevolence of a new teammate is therefore often more challenging to evaluate promptly.
The following section addresses the research questions by examining the identified categories: exploring how ethics, XAI, and collaboration practices with AI teammates contribute to enhancing performance in human-AI teams (RQ1), and investigating the challenges, trust repair mechanisms, and strategies for building cautious trust in AI teammates (RQ2).
ENHANCING PERFORMANCE WITH AI TEAMMATES
Ethics as the Core of Trustworthy AI Teammates
A research gap persists regarding how ethics operates in the context of HAITs (Textor et al., 2022). Malicious AI includes AI systems that are used with harmful intent, whether through deliberate design to cause damage or by exploiting vulnerabilities that arise when AI systems are deployed without ethical safeguards, transparency, or fairness (Brundage et al., 2018). The emergence of autonomous systems has elevated ethical considerations, especially as malicious AI has demonstrated considerable risks to humanity and security (Himmelreich, 2018). Consequently, researchers have intensified their efforts to establish ethical rules for AI, focusing on human values such as justice, fairness, privacy, non-maleficence, openness, and accountability (Jobin et al., 2019). Aspects such as reliability, safety, and trustworthiness (Shneiderman, 2020) are incorporated into established ethical frameworks, namely virtue ethics, deontology, and consequentialist theories, to inform the creation of AI (Zhou et al., 2020).
Ethics is a complex philosophical discipline that encompasses several domains and is founded on numerous moral theories examining the notions of right and wrong (Textor et al., 2022). Prominent moral frameworks encompass utilitarianism, which prioritizes the maximization of collective well-being; deontology, which underscores the importance of adhering to duties and rules generally set by societal institutions; and virtue ethics, which focuses on the development of commendable character traits that transcend mere habits and are profoundly embedded in individuals. Investigations into ethical interaction within HAITs currently rely significantly on these established ethical theories (Palmer & Zakhem, 2001).
Schelble et al. (2024) demonstrated that unethical behavior negatively affects trust within individuals as well as within teams. Schelble et al. (2024) also indicated that humans evaluate an AI teammate's ethicality when determining its trustworthiness. Information sharing among team members can increase situational awareness and promote performance (Demir et al., 2017).
Andres (2012) and Cooke et al. (2000) stated that explaining an AI teammate’s decisions helps human colleagues build a shared understanding, making collaboration easier. They further posited that humans may not consistently need an explanation, especially when they concur with the AI teammate’s judgment. Therefore, AI teammates need to possess customizable transparency strategies that adapt to situations or contexts; an illustrative sketch of such a strategy follows.
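The sketch below is a hypothetical policy, not a design drawn from the reviewed studies: the AI teammate volunteers an explanation when one is requested, when the human disagrees with its judgment (where shared understanding matters most), or when the stakes are high, and otherwise stays quiet to limit cognitive load.

    def should_explain(human_agrees: bool, high_stakes: bool, requested: bool) -> bool:
        """Hypothetical context-sensitive transparency policy for an AI teammate."""
        if requested:
            return True          # always honor an explicit request for explanation
        if not human_agrees:
            return True          # disagreement: explanation supports shared understanding
        return high_stakes       # agreement in low-stakes contexts: skip the explanation

    # Example: the human concurs with the AI's judgment on a routine task.
    print(should_explain(human_agrees=True, high_stakes=False, requested=False))  # False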
Ethics and trust share a nuanced relationship; hence, additional research is required on trust repair strategies following AI teammates’ ethical violations (Textor et al., 2022). While both perceived ethicality and trust are affected by ethical violations, only ethicality appears to be influenced by the nature of the violation. Violations of ethical values might adversely affect performance and trust (Parasuraman & Miller, 2004).
Autonomous AI teammates exhibit limitations in their communication strategies, particularly in scenarios with ethical uncertainty (Textor et al., 2022). Determining whether a decision is ethical is inherently subjective and often more complex than analyzing other performance criteria. Doris (1998) asserted that ethics is crucial in influencing expectations and evaluations of behavior.
Considering these expectations, AI teammates should cultivate and sustain mutual confidence with human collaborators over extended durations while working in complex scenarios (Schelble et al., 2024). This transition demands a more profound investigation into the impact of actions on trust and ethics within HAITs. The development and deployment of autonomous systems have intensified worries over their ethical ramifications, especially in contexts where AI teammates necessitate significant ethical deliberation (Textor et al., 2022).
Explainable-AI (XAI) as the Trust Pillar in AI Teammates
In high-stakes environments like healthcare, inadequate human-AI collaboration may lead to life-or-death outcomes; therefore, it is essential for AI teammates to exhibit optimal levels of explainability, interpretability, and plausibility to facilitate effective teamwork (Bienefeld et al., 2023). Dennis et al. (2023) emphasized that the transparency and reliability of AI teammates substantially affect individuals' attitudes and performance. A key challenge in achieving XAI is the “black box” nature of many contemporary AI systems, which complicates collaboration because such systems identify patterns in data without predefined criteria and offer little explanation of how they do so (Bienefeld et al., 2023).
However, how AI performance affects perceptions of team processes and the willingness to work with AI team members remains unclear. Dennis et al. (2023) asserted that the perceptions of AI teammates serve as critical indicators of how they will be perceived in comparison to their human counterparts performing similar roles. For humans to effectively cooperate with AI teammates, human-AI interaction systems must employ XAI that allows for bidirectional communication (Chen et al., 2018). The primary objective of XAI is to establish trust between human and AI agents, ensuring appropriate reliance on the technology, with reliability being crucial for fostering human-machine trust, while also highlighting the significance of ethical considerations in trust calibration (Textor et al., 2022).
Human-AI interaction should facilitate the dissemination of information regarding the environment or situation, as well as explain the rationale behind decision-making (Chen et al., 2018). Recent technological breakthroughs illustrate the capability of robots to exhibit transferable teamwork competencies, which are essential for team success (Seeber et al., 2020). For instance, machines with theory-of-mind reasoning capabilities could construct computational models of their counterparts through behavioral observation. The capability of AI teammates to derive knowledge from human teammates allows them to use that information for decision-making and planning further actions. Furthermore, these capabilities enhance implicit coordination between human and AI teammates.
By integrating theory-of-mind models with human-explainable outputs through XAI, AI teammates can ascertain the appropriate timing and manner of communication with human collaborators, hence optimizing trust and cooperation (Stowers et al., 2021). The development of causal and counterfactual reasoning in AI is expected to create machines that act as truly adaptive teammates, capable of understanding and responding to changes in human-machine teaming (HMT) and the environment (Stowers et al., 2021). However, Textor et al. (2022) emphasized that users must examine not just the decision criteria of the AI but also the ethical framework governing its operations. Perceiving an AI teammate as an opportunity can provide beneficial effects for the team, whereas viewing it as a threat may lead to adverse repercussions (Dennis et al., 2023). This is no easy task, as it demands interpretable explanations that do not place a significant cognitive burden on users.
Collaboration with Ethical and Trustworthy AI Teammates
Collaboration with ethical and reliable AI partners is essential for the success of HAITs, as trust profoundly affects outcomes including performance, trust calibration, and confidence (de Visser et al., 2018; McNeese et al., 2021; Schaefer et al., 2016). Trust calibration constitutes a vital component of human-AI interactions. Disagreement with an AI teammate's decision may cause humans to exhibit diminished trust during morally sensitive scenarios, irrespective of the AI teammate’s performance (Textor et al., 2022). In such cases, it can be difficult to calibrate trust accurately to the AI teammate's true capabilities (McNeese et al., 2021).
Miscalibrated trust, whether excessive or insufficient, can negatively impact performance through over-reliance or heightened workload (Parasuraman & Manzey, 2010). Trust is fundamental in human-AI interactions, as human perceptions of and reliance on autonomous technologies are significantly shaped by their trust in these systems (Schelble et al., 2024). As AI teammates assume more intricate responsibilities, the interdependence between humans and AI teammates is steadily rising (Seeber et al., 2020; Ulfert et al., 2023).
The integration of AI into jobs traditionally occupied by humans, especially in virtual collaboration, raises pressing questions regarding how AI teammates will affect interactions and results. Virtual collaboration constitutes a highly interactive environment wherein trust, satisfaction, and conflict are essential determinants of team success (Dennis et al., 2023). The integration of AI into these workplaces may disturb interpersonal dynamics, requiring a thorough evaluation of AI implementation in collaborative positions.
The performance of AI teammates can affect human perceptions, potentially resulting in both favorable and unfavorable impacts on attitudes (Dennis et al., 2023). Studies indicated that AI assistance in collaborative tasks enhances user impressions of the system (Dennis et al., 2023). Moreover, trust is pivotal in the acceptance of technology, as demonstrated by the heightened utilization of AI systems when individuals have confidence in the machine's outputs (Jain et al., 2022).
Trust is a psychological condition wherein a trustor, be it a human or an AI teammate, is prepared to embrace vulnerability predicated on optimistic anticipations of reliability (Georganta & Ulfert, 2024). AI teammates can utilize diverse interaction modalities (e.g., text or speech), demonstrate varying degrees of autonomy (e.g., partially or totally autonomous) (Dennis et al., 2023). AI teammates can also adopt distinct roles (e.g., submissive or collaborative), all of which can influence team dynamics (Dennis et al., 2023).
Etiquette in human-automation interactions is crucial for performance outcomes. Poor etiquette can adversely affect team performance, whereas good etiquette can mitigate the technological deficiencies of automation (Textor et al., 2022). Nonetheless, human interactions with machines frequently elicit heightened anxiety, resulting in individuals exhibiting reduced openness and increased agreeableness compared to interactions with other humans (Jain et al., 2022).
Team member satisfaction is a crucial metric of success, impacting both sustained performance and the utilization of collaborative technologies (Dennis et al., 2023). Furthermore, the perceptions of team members are frequently influenced by the performance of their colleagues, including AI teammates. The performance of AI affects human evaluations; superior performance typically yields favorable evaluations, whereas poor performance leads to adverse evaluations (Dennis et al., 2023).
Performance Gains with Trustworthy AI Teammates
Schelble et al. (2024) observed that trust in human-machine interaction is frequently associated with performance outcomes. Flathmann et al. (2023) emphasized that human-AI teaming is an expanding field of study, supported by empirical evidence indicating that such teams can improve workforce performance. In addition to performance, AI teammates can enhance essential team characteristics, such as trust and shared understanding, enabling teams to attain superior performance through heightened efficiency.
Schelble et al. (2024) asserted that elevated trust levels among teammates are crucial for cultivating mutual understanding and team cohesion, fundamental components that shape behavioral indicators of trust and ultimately result in high-performance outcomes. Dennis et al. (2023) posited the importance of trust in virtual teams, as it decreases transaction costs by allowing team members to engage less in self-protective activities.
Trust instills confidence and security, promoting transparency, enhancing information flow, and enabling team members to undertake risks predicated on the anticipated behaviors of others (Dennis et al., 2023). Schelble et al. (2024) further underscored that perceptions of others’ ethical conduct substantially affect trust, a relation that has been thoroughly examined in human-human partnerships but not in HAITs.
BARRIERS HINDERING TRUST IN AI TEAMMATES
Understanding Challenges in Building and Sustaining Trust in AI Teammates
Georganta and Ulfert (2024) asserted that the efficacy of HAITs is significantly influenced by the degree of trust established between human and AI collaborators. Trust is an essential determinant of success in both human-human and human-AI partnerships. Nonetheless, the cultivation of trust within human teams continues to be a considerable challenge. The integration of an AI teammate introduces a new dynamic, requiring team members to cultivate trust in the new AI teammate (Georganta & Ulfert, 2024).
Establishing trust is crucial for promoting good long-term collaboration in HAITs (Ulfert et al., 2023). However, trust in HAITs may evolve differently than in teams consisting exclusively of human members. Georganta and Ulfert (2024) proposed that cognitive and affective interpersonal trust is expected to be lower upon the introduction of a new AI teammate compared to the addition of a new human teammate. Existing team members establish trust based on prior experiences and their perceived resemblance to the new teammate, as suggested by social categorization theory (Grossman & Feitosa, 2018).
The trustworthiness of new team members, including AI, is shaped by impressions of ability, integrity, and benevolence (Georganta & Ulfert, 2024). These are the essential components of trust that are crucial in newly established team dynamics. The Technology Acceptance Model (TAM) posits that perceived usefulness (PU) and perceived ease of use (PEOU) are important determinants affecting users' decisions regarding new technologies (Inkpen et al., 2023). AI teammates are frequently regarded as less reliable than human counterparts due to ambiguity in assessing the AI’s competence, integrity, and other characteristics.
Insufficient prior history and experience with AI teammates may result in less trust and reduced transparency (Grossman & Feitosa, 2018). Research indicates that lack of prior engagement with autonomous technologies may lead to diminished interpersonal trust (Schaefer et al., 2016; Ulfert & Georganta, 2020). In the absence of adequate understanding of an AI teammate’s capabilities, current team members may regard the AI teammate as less reliable than a human counterpart. Georganta and Ulfert (2024) argue that the introduction of an AI teammate is likely to generate fewer trust behaviors among human team members due to the AI’s unfamiliarity.
Dennis et al. (2023) pointed out that human team members may pursue self-interested goals that conflict with organizational or team objectives, negatively impacting team performance. Conversely, AI teammates whose goals align with organizational goals are less likely to be viewed as having conflicting interests. Concerns over information asymmetry emerge with AI-enabled devices, frequently perceived as “black boxes” that conceal the information they hold and their methods of utilizing it. This opacity can restrict human team members’ comprehension of AI activity and heighten privacy concerns (Dennis et al., 2023).
Zerilli et al. (2022) emphasized that trust in algorithmic systems is essential for the effective operation of HAITs. Concerns about the “black-box” aspect of AI continue, as information asymmetry can make it difficult for human teammates to fully understand AI behavior (Dennis et al., 2023). Moreover, trust is an essential element for effective team dynamics and performance inside HAITs (Textor et al., 2022), with empirical research supporting the significance of well-established trust in these settings (McNeese et al., 2021).
As AI teammates gain greater autonomy, their decisions may be evaluated on ethical grounds, similar to human collaborators (Textor et al., 2022). Organizations must acknowledge the biases that team members may possess towards AI teammates and create training resources to mitigate these biases (Dennis et al., 2023). In the creation of AI team members, it is essential to prioritize not just their capabilities but also the users’ perceptions of their benevolence and integrity, as these elements significantly impact trustworthiness and the willingness to interact with AI (Dennis et al., 2023).
Trust Violations and Repair Mechanisms in AI Teammates
Trust in machines has been defined as “the attitude that an agent will assist in achieving an individual's objectives in a context marked by uncertainty and vulnerability” (Zerilli et al., 2022, p. 1). Any breach of trust in HAITs calls for restoration measures to reinstate team performance. Trust in AI systems is affected by ethical considerations, and unethical behavior can significantly damage that trust (Jones & Bowie, 1998).
Prior research has shown that individuals’ judgments of others' ethicality substantially affect trust, a correlation that applies to both human-human and human-AI collaborations. Users can lose faith in AI teammates more easily than in their human counterparts for making the same types of mistakes (Dennis et al., 2023). Empirical evidence indicates that individuals exhibit more sensitivity to errors committed by AI teammates than to those made by humans.
Trust violations by AI teammates can carry ethical implications, as the AI teammate's activities may satisfy an immediate goal while doing so unethically (Flathmann et al., 2021). Yet there may be instances where an AI teammate unknowingly performs an action perceived as unethical, leading to a trust breach (Schelble et al., 2024). To preserve trust, an AI teammate must therefore refrain from any unethical conduct.
Effects of Trust Violations
Trust within human-AI partnerships is delicate and fluctuating, and trust violations are unavoidable due to the limitations in technology (de Visser et al., 2018). Trust violations can result in serious consequences, including impaired team performance, eroded confidence, and a decline in trust in HAITs (McNeese et al., 2021). Trust breakdowns in human-AI collaborations may also arise from ethical transgressions (Schelble et al., 2024).
Evidence shows that humans already assess the ethicality of AI teammates when determining whether to trust it (Schelble et al., 2024). Furthermore, an AI teammate engaging in an action perceived as unethical can dissolve the trust among team members (Flathmann et al., 2021). The decisions taken by the independently acting AI teammates impact the entire HAIT and hence, ethical considerations become further critical (Schelble et al., 2024).
Trust Repair Strategies
Strategies for repairing trust are essential following a trust breach, particularly one involving ethics. When a human collaborator considers an AI teammate’s actions unethical, restoring trust to its pre-violation level becomes a priority (Schelble et al., 2024). Research has revealed several trust restoration tactics, such as apologies and denials, with apologies proving more effective for competency-related violations (de Visser et al., 2018). However, the effectiveness of trust repair strategies depends on the type of violation (e.g., competency vs. integrity) and the context (e.g., high-risk situations; Textor et al., 2022).
Calibration of trust in HAITs can be complicated in ethically complex scenarios due to the lack of clarity in determining what action should be right. This complexity highlights the necessity of comprehending the impact of AI ethicality on trust (Textor et al., 2022). Although various models for trust restoration in human-machine teams have been suggested, further empirical research is required to investigate the correlation between trust and team performance (Textor et al., 2022).
Ethical trust violations may pose unique challenges, wherein traditional trust repair strategies like apologies and denials may not be effective (Jones & Bowie, 1998). Developing an ethical AI teammate is a primary goal in AI research, as unethical AI has already demonstrated negative outcomes, such as social inequities in recruitment. Textor et al. (2022) posited that trust is lowest when a system denies responsibility for an error, and highest when the system provides an apology for competency-related infractions. Thus, understanding how to repair trust in ethically charged situations is critical for the future of HAITs.
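A schematic reading of these findings, offered purely as an illustrative sketch rather than a validated repair algorithm, maps the type of violation to the repair response the cited studies suggest is most promising:

    def suggest_repair(violation_type: str) -> str:
        """Illustrative mapping from violation type to a candidate repair strategy.

        Loosely based on the cited findings: apologies help most for
        competency-related violations (de Visser et al., 2018; Textor et al., 2022),
        denial tends to leave trust lowest, and ethical violations may resist
        standard tactics (Jones & Bowie, 1998), warranting explanation and oversight.
        """
        strategies = {
            "competency": "apologize and acknowledge the error",
            "integrity": "explain the decision process transparently",
            "ethical": "explain, escalate to human oversight, and update the system",
        }
        return strategies.get(violation_type, "explain and defer to human judgment")

    print(suggest_repair("competency"))  # apologize and acknowledge the error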
Trusting AI Teammates with Caution
The trust between human and AI teammates is intricately connected to team performance, and initiatives designed to bolster team trust can improve overall efficacy (Textor et al., 2022). Difficulties in calibrating trust towards an AI teammate is a significant challenge in human-AI collaboration, particularly in contexts of knowledge imbalance (Gomez et al., 2023). Ulfert et al. (2023) observed that factors such as distrust, excessive trust, or insufficient trust in HAITs can substantially deteriorate team efficiency.
Parasuraman and Manzey (2010) asserted that humans exhibit trust in AI teammates capable of providing decisions. This trust can sometimes exceed what is warranted, resulting in a phenomenon called “automation bias.” Due to automation bias, human teammates can develop a tendency to over-rely on machines, becoming complacent and accepting an AI teammate’s advice without validation.
Bienefeld et al. (2023) emphasized the necessity of accurately calibrating trust in AI agents, especially in high-stakes environments where excessive reliance may result in safety hazards. Achieving an optimal equilibrium—avoiding excessive trust or insufficient confidence in the AI—is crucial for ensuring team safety and effectiveness. Nonetheless, trust calibration in machines cannot be examined in isolation, as human needs and behaviors exhibit considerable variability across diverse situations (Jain et al., 2022).
Moreover, Zerilli et al. (2022) proposed the concept of “algorithmic vigilance,” a balance between skepticism and trust in AI teammates that promotes optimal HAIT performance. They suggested a spectrum with complacency and opposition at the extremes and algorithmic vigilance ideally located in the middle. This equilibrium in the extent of trusting AI teammates represents healthy user engagement (Zerilli et al., 2022).
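This spectrum can be caricatured numerically. The sketch below is a hypothetical illustration with made-up thresholds, not a measurement model from the reviewed studies: it compares how often a human accepts the AI's recommendations against the AI's actual reliability and names the resulting zone.

    def vigilance_zone(acceptance_rate: float, ai_reliability: float,
                       tolerance: float = 0.1) -> str:
        """Classify engagement on Zerilli et al.'s (2022) spectrum (illustrative).

        acceptance_rate: fraction of AI recommendations the human accepts.
        ai_reliability: fraction of AI recommendations that prove correct.
        The tolerance threshold is an assumption, not an empirical value.
        """
        gap = acceptance_rate - ai_reliability
        if gap > tolerance:
            return "complacency (over-trust, i.e., automation bias)"
        if gap < -tolerance:
            return "opposition (under-trust)"
        return "algorithmic vigilance (well-calibrated trust)"

    print(vigilance_zone(acceptance_rate=0.95, ai_reliability=0.75))  # complacency
    print(vigilance_zone(acceptance_rate=0.78, ai_reliability=0.75))  # vigilance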
Furthermore, Textor et al. (2022) asserted the importance of supervising AI teammates to guarantee their ethical compliance. They underscored the necessity of oversight, whether by human teammates or external organizational members, to uphold a collective ethical framework within teams. In the absence of this control, AI systems may diverge from established standards, potentially leading to ethical breaches.
The assessment of an AI teammate's ethical conduct faces multiple challenges (Textor et al., 2022). One challenge is that every system has limitations, making a perfectly ethical and trustworthy system impossible. Another is the subjective nature of ethics: something unethical to one individual could be perfectly ethical to another.
DISCUSSION
Implications for Practice
Thomas et al. (2022) drew attention to the evolution of Human Resource Development (HRD) to include technical advancements in novel and interdisciplinary domains such as virtual HRD (VHRD), emphasizing the necessity to integrate novel methods to meet contemporary organizational requirements. Following these directions, this study discovered recent advancements in HAITs to enhance performance, driven by ethical and trustworthy AI teammates.
In fast-paced HAITs like virtual teams or crisis management, where team dynamics change swiftly, flexible AI teammates can offer the required agility to enhance human decision-making (Seeber et al., 2020). Human Resource Development professionals should promote AI teammates that possess adaptive learning capabilities, allowing them to evolve in tandem with human colleagues and more effectively respond to scenario variations. This adaptability will improve immediate team performance and guarantee the long-term incorporation of AI teammates into the organizational workflow (Stowers et al., 2021).
The incorporation of AI teammates into organizational frameworks has significant ramifications for Human Resource Development (HRD), especially in promoting trust, collaboration, and ethical conduct within HAITs. Practitioners should prioritize AI teammates that are transparent about how they reason, how they reach decisions, and why they communicate what they do. Research highlights the vulnerability of trust in AI teammates, indicating that it can be quickly compromised when transparency is insufficient (McNeese et al., 2021).
Nonetheless, trust in AI teammates is not the same as trust in humans, as perceptions of AI teammates’ trustworthiness can vary greatly from those of human teammates (Georganta & Ulfert, 2024; O’Neill et al., 2022). Therefore, Chang and Ke (2023) asserted that HRD professionals are tasked with the essential duty of aligning individual and organizational ethical norms. The algorithms behind AI teammates should promote transparency that fosters diversity, equity, and inclusion practices in the industry.
Consequently, designers and users should collaborate in the development of AI teammates as a confidence-building measure (Malik et al., 2022). Explainable AI is an essential instrument for reducing uncertainty and enhancing a sense of control, which is vital for effective human-AI collaboration (Chen et al., 2018). Human Resource Development professionals must emphasize the use of XAI technologies that improve transparency by showing how AI teammates think and decide.
Dennis et al. (2023) asserted that when AI teammates become increasingly incorporated into workplaces, humans who swiftly acclimate to collaborating with them will acquire a substantial advantage. Individuals who cultivate these talents may be more suitably equipped to assume leadership positions. Therefore, the responsibilities of HRD practitioners go beyond mere technical training, involving the cultivation of cognitive frameworks that empower employees to comprehend AI behavior in alignment with the goals of the HAITs.
HRD practitioners can impart skills for managing AI teammates through initiatives such as team-based learning programs that prioritize collaborative decision-making and the interdependence of human and AI teammates (Textor et al., 2022). Ethical considerations in AI teammate design are especially pertinent for HRD, given the increasing significance of AI teammates in decision-making processes. Advancements in AI teammates should adhere to concepts of fairness, accountability, and transparency to guarantee conformity with organizational values and ethical norms (Jobin et al., 2019).
Parasuraman and Manzey (2010) cautioned about the possibility of humans either over-relying on AI teammates, assuming they are always correct, or under-relying on them, treating AI with undue skepticism due to a lack of understanding; both extremes negatively impact team performance. Thus, HRD interventions should concentrate on formulating trust calibration procedures that instruct employees on when to rely on AI teammates and when to critically evaluate their judgments. By enabling proper trust calibration, HRD practitioners can improve team cohesion and guarantee that AI coworkers are perceived as useful collaborators rather than as entities to be distrusted or excessively depended upon.
HRD experts must actively ensure that ethical norms are integrated into the technical facets of AI creation and implemented inside company policies and procedures. This entails ongoing surveillance of AI systems to identify possible biases or ethical violations, together with the establishment of responsive methods to address these issues promptly (Zhou et al., 2020). Ultimately, adaptability and flexibility in AI systems are crucial for enabling AI collaborators to respond to changing team configurations and fluid work environments.
Implications for Research
The increasing presence of AI teammates in organizational settings requires a thorough examination of its effects on HRD theory and practice. Although there is a growing corpus of literature on trust in AI systems, the precise mechanisms by which trust is established, sustained, and restored in human-AI partnerships remain inadequately examined (McNeese et al., 2021). Therefore, longitudinal studies are essential to examine the evolution of trust in AI teammates over time, especially in contexts where AI teammates assume progressively independent roles in decision-making.
Trust repair methods require additional empirical investigation. Research should concentrate on discovering efficient trust restoration solutions that can be utilized in HAITs to reestablish team trust and performance (Schelble et al., 2024). This is important considering that errors and ethical violations by AI teammates are unavoidable due to technical limitations. Theoretical models of trust repair from all-human teams must be adapted and applied to address the distinct dynamics of AI teammates in HAITs, including the influence of perceived infallibility and machine autonomy.
Additionally, the influence of AI teammates on traditional team dynamics, particularly in the context of HRD, represents a significant subject for exploration. Contemporary research mostly emphasizes the technical performance of AI systems; however, there exists an imperative to comprehend the impact of AI integration on human variables, including cooperation, collaboration, leadership, and learning. HRD scholars should explore how AI teammates impact psychological safety, knowledge sharing, and leadership emergence within teams, as these constructs are central to organizational performance (Dennis et al., 2023).
Furthermore, research should investigate the evolving functions of leadership in teams where AI teammates play a decision-making role. The interaction between human leaders and AI teammates, as well as the impact of AI teammates on leadership processes including coordination, delegation, and decision-making, are critical inquiries for the advancement of HRD theory. Research on AI teammates’ collaboration styles, autonomy, cognitive load, and shared mental models also has tremendous scope to add to existing HRD knowledge.
HRD scholar-practitioners should examine the design and implementation of AI teammates to ensure they adhere to ethical standards while maintaining efficiency and productivity (Textor et al., 2022). A vital area for HRD research is the ethical ramifications of AI teammates in workplace decision-making processes. While ethical AI research has gained momentum, there remains a gap in understanding how AI-driven decisions align with or challenge organizational ethics, particularly in contexts involving employee performance evaluations, hiring decisions, and promotions.
Further research is needed to examine the impact of ethical violations by AI teammates on employee views of justice and confidence inside the firm, especially from the lens of bias and fairness (Schelble et al., 2024). The creation of theoretical frameworks that include AI into current HRD models of learning and development is also a research necessity. Although AI is praised for its ability to improve efficiency and decision-making, HRD scholars must rigorously assess its effects on employee learning processes and ongoing development.
LIMITATIONS AND RECOMMENDATIONS FOR FUTURE REVIEWS
Despite adhering to a stringent systematic review methodology, the current study presents a few significant limitations and recommendations that merit attention. First, although the keyword artificial intelligen* was employed to access all research pertaining to this field, the search may have overlooked relevant studies using related terms such as machine learning (ML), natural language processing (NLP), or robotics. A thorough search was conducted, augmented by a manual backward and forward examination of citations and references. Nonetheless, there remains no assurance that all pertinent studies were incorporated, a common occurrence when reviewing expansive subjects such as this.
Since this study focused on the latest developments of the past decade, relevant studies prior to 2013 may have been missed. To guarantee the quality of the examined studies, the review was restricted to peer-reviewed publications and omitted grey literature, such as book chapters and conference proceedings, which could have offered supplementary insights into the role of AI teammates in performance enhancement. This review also incorporated only papers published in English, perhaps overlooking significant studies in other languages within this domain.
Building on the present study's focus on AI teammate trustworthiness, future research could examine aspects like shared mental models, autonomy, and the impact on human teammates to further explore their influence on the performance of Human-AI Teams (HAITs). Torraco (2016) observed that literature reviews are essential for both new and mature topics. Therefore, as the field of HAITs progresses in HRD, an integrative literature review should be undertaken after the subject reaches greater maturity. A meta-analysis of the impacts of AI teammates on performance, could enhance the field by providing a comprehensive account of available empirical evidence.
An unsolved question remains: to what extent is trust in AI teammates affected by ethical violations? Automation bias, characterized by an excessive reliance on AI teammates, may further complicate this dynamic (Parasuraman & Manzey, 2010). Hence, future research should focus on quantitative and experimental studies that measure how trust in an AI teammate can be calibrated. Textor et al. (2022) highlighted that the increasing deployment of autonomous systems has generated apprehensions regarding their ethical ramifications, calling for additional research on the impact of an AI teammate's ethical conduct on trust within HAITs.
CONCLUSION
The growing incorporation of Artificial Intelligence (AI) collaborators into Human-AI teams (HAITs) offers considerable opportunities as well as challenges in the future of work. As AI evolves to be more autonomous and collaborative, its contribution to improving team performance is receiving increased attention. However, trust remains a critical factor in determining the success of AI teammates within HAITs. This systematic literature review (SLR) explored how ethical and trustworthy AI teammates can enhance team performance, emphasizing critical elements such as explainable AI (XAI), ethics, and collaborative practices. This research also highlights the barriers in trust development in an AI teammate and ways to mitigate trust violations.
The review analyzed 37 peer-reviewed publications from the Scopus and Web of Science databases, employing the PRISMA methodology to guarantee thorough exploration of the subject. It synthesized results across disciplines to explore the relationship between AI trustworthiness and team performance, identifying the obstacles to, and the potential for, enhancing collaboration between human and AI teammates. Trust is recognized as the core component of effective cooperation in HAITs, affecting the propensity of both human and AI collaborators to engage in teamwork.
Trust in AI is complex, as it contrasts with interpersonal trust in human teams. Trust among human team members is founded on social and emotional relationships, whereas trust in AI teammates depends on transparency, reliability, and compliance with ethical standards. Ethical AI teammates, constructed with considerations of fairness, privacy, and accountability, cultivate enhanced trust among human collaborators. Explainable AI is essential as it offers reasoning for AI teammates’ decision-making processes, enabling human collaborators to comprehend and trust AI judgments more effectively.
This study's first research question examined the impact of ethical and trustworthy AI teammates on enhancing team performance. The analysis indicated that ethical AI systems improve team performance by reducing ambiguity and promoting transparency in decision-making processes. Explainable Artificial Intelligence enhances AI teammate’s predictability and reliability for human collaborators. Moreover, ethical AI teammates are less likely to engage in biased or unfair decision-making, thereby supporting a cooperative and inclusive atmosphere in HAITs.
Collaboration between human and AI teammates is essential for enhancing performance. The review indicated that teams with ethical and reliable AI teammates generally exhibit increased cooperation and mutual understanding. Human collaborators are more inclined to depend on AI teammates when they regard it as ethical and trustworthy, resulting in enhanced decision-making and superior team performance. The results indicated that AI teammates can positively influence the behaviors of human colleagues, promoting a more unified and effective team dynamic.
The second research question examined the challenges that block trust in AI teammates and investigated strategies for reinstating trust following breaches. The review highlighted various obstacles to establishing and sustaining trust in AI collaborators, notably the “black box” characteristic of numerous AI systems, which complicates human comprehension of the decision-making processes of AI teammates. The absence of openness can engender mistrust, particularly in high-stakes contexts where trust is essential for success.
Trust breaches by AI teammates, including unethical conduct or decision-making errors, can profoundly affect team performance. The analysis underscored the significance of trusting AI teammates with caution in addressing these violations and restoring trust. Mechanisms for trust repair, including apologies, explanations, and system updates, were examined as effective ways of restoring trust following violations.
In conclusion, this review addressed the research questions thereby offering a thorough examination of the impact of ethical and reliable AI teammates on team performance. The results highlight the significance of transparency, reliability, and ethics in establishing and sustaining trust between human and AI teammates in HAITs. As AI continues to evolve and integrate into diverse collaborative environments, organizations must prioritize the creation of ethical and trustworthy AI teammates to optimize team performance and uphold sustained cooperation.

Contributor Notes
SANKET RAMCHANDRA PATOLE is an HR systems and data professional and a PhD candidate in Human Resource Development at The University of Texas at Tyler, United States. Sanket is a member of the research team at the WAVE (Workplaces and Virtual Environments) lab at Michigan State University, and his research interests are technology, teams, ethics, and Human Resources. Sanket holds a Bachelor of Engineering and a Master of Management Studies from the University of Mumbai, followed by an MS in Business Analytics from The University of Texas at Dallas. He expresses sincere gratitude towards his mentors for their inspiration during the rigorous research endeavor that culminated in this publication. Dr. John R. Turner provided insightful guidance, while Dr. Rose Baker, Dr. Rochell R. McWhorter, and Dr. Robert E. Carpenter endorsed his concepts for the exploration of AI in teams science. Email: sanket.ramchandra.patole@gmail.com


