Polarity Management in AI Integration: A Human-Centered Systems Perspective
AI is reshaping how organizations operate—but the real challenge isn’t technological, it’s human. This article explores how polarity management can help leaders balance efficiency, connection, and meaning in an AI-enabled world.
The rapid integration of AI into both organizational and personal contexts has introduced a new set of opportunities and challenges. While much of the current discourse focuses on technical capabilities and productivity gains, there is growing recognition that AI adoption is not solely a technological shift, but a human and relational transformation.
At HumLab, a human-centered consulting practice grounded in systems thinking and process consultation, we approach AI integration through the lens of how people experience, interpret, and adapt to change. From this perspective, the integration of AI is not simply about implementation; it is about how organizations navigate the tensions that emerge when new technologies reshape how humans think, relate, and work.
A recent large-scale qualitative study conducted by Anthropic, analyzing over 81,000 user interactions, provides a unique window into how individuals are engaging with AI systems in practice. The findings reveal not only the perceived benefits of AI, such as increased efficiency, accessibility, and support, but also emerging concerns related to human connection, autonomy, and cognitive reliance.
This article argues that many of the tensions emerging from AI integration are best understood not as problems to be solved, but as polarities to be managed. Drawing on the work of Barry Johnson (1992), polarity management offers a framework for navigating persistent, interdependent tensions, such as efficiency and human connection, without reducing them to binary choices.
AI Adoption as a Set of Interdependent Tensions
The Anthropic study highlights that individuals often turn to AI to fulfill fundamental human needs:
“People’s positive visions for AI seemed mostly to stem from a few basic desires: more time, more autonomy, more personal connection.”
These findings suggest that AI is not merely a tool for optimization, but also a medium through which individuals seek relief from temporal, cognitive, and relational constraints.
However, the same features that make AI valuable also introduce potential risks:
“These same qualities that make AI a patient tutor or tireless colleague also make it a place people go when human connection is unavailable or feels too uncomfortable.”
This duality is further illustrated in user narratives. In one instance, a user described relying on AI for emotional support following the loss of a parent, emphasizing its “unlimited patience.” In another, a user reflected on the unintended consequences of this reliance:
“I talked more with you than with a friend… But it was a stupid choice—I should have talked with that friend.”
These examples point to a critical insight: AI can both support and displace human connection. Similarly, the study identifies broader structural concerns, including job displacement and loss of autonomy:
“Concerns about jobs and the economy (22%) and about maintaining human autonomy and agency (22%) were similarly common.”
Taken together, these findings reveal that AI integration surfaces a set of persistent tensions: between efficiency and meaning, between accessibility and dependency, and between automation and human agency.
From Problems to Polarities
Traditional approaches to organizational change often frame challenges as problems to be solved. Within this logic, organizations may ask:
Should we prioritize efficiency or employee wellbeing?
Should we automate processes or preserve human judgment?
However, such questions assume that one pole can be selected over the other. Polarity management challenges this assumption.
According to Johnson (1992), polarities are interdependent pairs of values or perspectives that cannot be resolved in favor of one side without generating negative consequences. Instead, they must be actively managed over time to leverage the benefits of both poles while minimizing their downsides.
From this perspective, the central question of AI integration shifts from:
“Which should we prioritize?”
to:
“How can we intentionally leverage both, without triggering the downsides of either?”
Core Polarities in AI Integration
1. Efficiency and Human Connection
AI enables unprecedented levels of efficiency, reducing time spent on repetitive tasks and increasing access to information. However, an overemphasis on efficiency can erode relational depth and meaning.
Conversely, prioritizing human connection fosters trust, empathy, and engagement, but may be perceived as less scalable or efficient.
The Anthropic findings illustrate this polarity clearly: AI enhances accessibility and support, yet may simultaneously reduce engagement in human relationships.
2. Automation and Human Judgment
AI systems excel at pattern recognition, data processing, and consistency. These capabilities can enhance decision-making processes and reduce cognitive load.
However, over-reliance on automation raises concerns about diminished critical thinking, loss of expertise, and ethical blind spots.
Human judgment, while inherently imperfect and subject to bias, provides contextual awareness, moral reasoning, and the capacity to navigate ambiguity.
3. Accessibility and Dependency
AI increases access to knowledge, support, and guidance, often at low or no cost. This is particularly significant for individuals who may lack access to traditional forms of support.
At the same time, increased accessibility can lead to dependency, where individuals substitute AI interactions for more effortful, and often more meaningful, human engagement.
4. Innovation and Stability
Organizations adopting AI are often driven by the need to innovate and remain competitive. However, continuous innovation can create instability, change fatigue, and fragmentation.
Stability, on the other hand, provides structure, clarity, and reliability, but may inhibit adaptation and responsiveness.
Implications for Organizations and Leaders
Recognizing AI integration as a set of polarities has several implications for practice.
First, it shifts the role of leadership from decision-making to tension management. Leaders are not tasked with choosing between efficiency and human connection, but with designing systems that enable both.
Second, it emphasizes the importance of early warning signals. For example:
Increased efficiency accompanied by declining engagement may indicate overemphasis on automation
High reliance on AI for interpersonal or emotional support may signal a weakening of relational networks
Third, it reinforces the need to preserve and cultivate uniquely human capacities, including:
Empathy
Ethical reasoning
Relational intelligence
Comfort with discomfort
As the Anthropic study suggests, AI may reduce the friction associated with seeking support. However, it is often through this friction—through vulnerability and discomfort—that meaningful growth occurs.
Conclusion
AI integration is often framed as a technical or operational challenge. However, the evidence suggests that it is equally, if not primarily, a human systems challenge.
The tensions emerging from AI adoption, between efficiency and connection, between automation and judgment, and between accessibility and dependency, are not problems to be solved but polarities to be managed.
Polarity management provides a valuable framework for navigating these tensions in a way that avoids false trade-offs and supports more sustainable, human-centered outcomes.
As organizations continue to integrate AI into their operations, the question is not whether to embrace technology or preserve human values. Rather, it is how to intentionally hold both—leveraging the strengths of AI while safeguarding the relational and cognitive capacities that define human experience.
References
Johnson, B. (1992). Polarity management: Identifying and managing unsolvable problems. HRD Press.
Anthropic. (2024). 81,000 conversations: What people want from AI. https://www.anthropic.com/features/81k-interviews
Smith, W. K., & Lewis, M. W. (2011). Toward a theory of paradox: A dynamic equilibrium model of organizing. Academy of Management Review, 36(2), 381–403.
AI Adoption and Psychosocial Risk in Quebec Workplaces: The Policy Gap Under Law 27
Introduction
Under Quebec’s current occupational health and safety regime, employers are required to identify, analyze, and prevent psychosocial risks at work. The Act respecting occupational health and safety states that prevention programs and action plans must eliminate risks to workers’ physical and mental well-being at the source. The Commission des normes, de l’équité, de la santé et de la sécurité du travail (CNESST) and the Institut national de santé publique du Québec (INSPQ) now clearly frame psychosocial risks as workplace hazards that must be identified and managed (Act respecting occupational health and safety, CQLR c S-2.1; CNESST, n.d.-b; INSPQ, n.d.).
At the same time, Quebec workplaces are rapidly adopting artificial intelligence. AI is now embedded in scheduling, screening and hiring, performance analytics, productivity monitoring, generative copilots, and decision-support tools. These systems do not just change efficiency. They can also change workload, autonomy, fairness, communication, and the lived experience of work itself, which is precisely the terrain of psychosocial risk (CNESST, n.d.-a).
This is the gap. Quebec’s existing framework for psychosocial risk is broad enough to cover many of the harms that AI can create or intensify, but the policy guidance has not yet caught up to AI as a distinct source of those harms. The problem is not that Law 27 failed. The problem is that employers can comply on paper while missing one of the fastest-moving changes in the conditions of work (Act respecting occupational health and safety, CQLR c S-2.1; Loi modernisant le régime de santé et de sécurité du travail, 2021).
A framework that can see the problem, but does not yet name it
INSPQ defines psychosocial risks as factors tied to work organization, management practices, employment conditions, and social relations that increase the probability of adverse physical and psychological health effects (INSPQ, n.d.; Vézina et al., 2011). CNESST’s guidance identifies ten core psychosocial risk factors that employers are expected to assess and address: psychological demands and workload, decision latitude and autonomy, recognition, social support, organizational justice, job insecurity, information and communication, psychological harassment, workplace violence, and work-life balance (CNESST, n.d.-a).
CNESST also emphasizes that these factors should be considered together, rather than one by one, because their interaction matters. High demands combined with low autonomy, for example, are more harmful than either factor alone (CNESST, n.d.-a; Vézina et al., 2011).
Quebec therefore already has a strong conceptual map. What it does not yet have is a clear AI layer on top of that map. That matters because AI often enters organizations looking like a technology decision, not a work-design decision. A new tool is purchased, a pilot is launched, a team is expected to adapt. But in psychosocial terms, the real question is not whether the tool is impressive. It is whether it changes demands, control, support, fairness, recognition, and security in ways that could harm workers.
What follows is an analysis of how AI adoption intersects with the factors the CNESST framework already requires employers to assess.
Workload and psychological demands
One of the clearest pathways is workload. AI is often introduced with the promise of reducing effort, but in practice it frequently raises expectations. A customer service team with AI drafting tools may be expected to handle more tickets. A professional using a generative copilot may be expected to produce more output in the same time. And even when manual effort drops, cognitive work can rise. People still have to review outputs, catch errors, explain inconsistencies, and carry responsibility when the system is wrong (CNESST, n.d.-a).
The evidence here is still emerging, but it is already enough to justify caution. A longitudinal study of workers in Germany found no sizeable overall negative effect of occupational AI exposure on well-being or mental health in its main measure, but it did find small negative effects on life and job satisfaction when using a more granular self-reported AI exposure measure (Giuntella et al., 2025). This is not evidence of broad harm, but it does suggest that the lived experience of direct AI use may matter more than high-level exposure categories indicate.
Decision latitude and autonomy
Decision latitude is one of the classic pillars of psychosocial risk research. When people have little control over how and when they do their work, stress risk rises (Karasek, 1979). AI can quietly erode that control. Scheduling systems can optimize shifts with little worker input. Task-allocation systems can dictate priorities. Decision-support tools can produce recommendations that workers are expected, implicitly or explicitly, to follow. A tool presented as assistance can operate, in practice, as a constraint (CNESST, n.d.-a).
This is one of the places where current guidance feels incomplete. CNESST’s materials on decision latitude are useful, but they were not written with algorithmic management and generative systems at the center of the picture. That leaves employers with a familiar checklist and a new kind of intervention that does not sit neatly on it.
Recognition and social support
Recognition protects against burnout, disengagement, and the feeling that one’s effort has become invisible (Siegrist, 1996). AI complicates recognition in subtle ways. When outputs are produced with a generative tool, attribution becomes blurry. The thinking, editing, judgment, and verification done by the worker can disappear behind the apparent speed of the machine. Organizations may then raise expectations without raising recognition, pay, or support. The result is a new version of an old problem: effort-reward imbalance, now dressed in the language of innovation (CNESST, n.d.-a).
Social support can also thin out at exactly the moment it is needed most. AI systems can replace human touchpoints with chatbots, dashboards, automated prompts, and algorithm-mediated feedback. Workers who are struggling with a tool may hesitate to ask for help, especially in cultures where AI fluency is treated as the new baseline. A workplace can end up with more technical mediation and less actual support, which is the opposite of what the existing psychosocial framework identifies as protective (CNESST, n.d.-a; Vézina et al., 2011).
Organizational justice and communication
Organizational justice is about whether decisions feel fair, processes feel legitimate, and people feel they are treated with respect (Colquitt, 2001). AI raises justice issues almost by default. Hiring tools, performance scoring systems, and productivity analytics can rely on criteria that workers do not understand, cannot challenge, and do not experience as fully reflective of their contribution. Even when a system is statistically defensible, opacity itself can erode trust (CNESST, n.d.-a).
Communication is equally important. AI adoption is organizational change, but many organizations still communicate it as a software rollout rather than a redesign of work. Workers are told what the tool does, but not what it may change in accountability, pace, judgment, or monitoring. When the human implications of AI are underexplained, uncertainty fills the gap, and uncertainty is itself part of psychosocial risk (CNESST, n.d.-a; Vézina et al., 2011).
Job insecurity, harassment, and work-life boundaries
AI also interacts with the remaining CNESST risk factors through more diffuse but still important pathways. Job insecurity can rise long before any role disappears, simply because workers begin to wonder whether their work is becoming more replaceable, more measurable, or more easily reorganized around the technology. The uncertainty itself can be harmful. Kim and Lee (2024) found, in a three-wave study of South Korean professionals, that organizational AI adoption increased job stress and, through job stress, burnout, while self-efficacy in AI learning helped buffer that effect.
AI-enabled monitoring can also create climates that feel coercive, contributing to conditions in which harassment dynamics intensify. Always-on tools can lengthen the workday without anyone formally extending it. Content moderation and review work can expose people to concentrated harmful material. These are not all new hazards, but AI can intensify them, scale them, or make them harder to notice early.
Worker consultation: a procedural gap
Law 27 does not only require employers to identify psychosocial risks. It also strengthens prevention and participation mechanisms within Quebec’s health and safety regime, including mechanisms for worker involvement in prevention planning and workplace health and safety processes (Act respecting occupational health and safety, CQLR c S-2.1; Loi modernisant le régime de santé et de sécurité du travail, 2021).
In practice, AI adoption rarely includes workers in the decision. Technology is selected, piloted, and deployed through IT procurement, executive strategy, or operational efficiency workstreams, none of which typically involve the health and safety committee or the workers whose conditions of work will change. This is not just a psychosocial risk issue. It is also a procedural gap. If AI adoption changes the psychosocial conditions of work, and workers are not consulted in the process, the employer may already be falling short of the participatory spirit of the current regime.
That creates a practical compliance risk. An employer can have a prevention program, a psychosocial risk assessment, and an active AI rollout underway, yet still fail to connect them.
The real policy gap
The policy gap in Quebec is not the absence of a psychosocial framework. It is the absence of explicit guidance connecting that framework to AI adoption. Employers can run a psychosocial risk assessment and still fail to ask whether a new scheduling algorithm is reducing autonomy, whether a performance dashboard is undermining justice, or whether a generative copilot is intensifying workload through higher output expectations. CNESST guidance currently provides the categories. It does not yet give organizations enough help in seeing how AI fits inside them (CNESST, n.d.-a; CNESST, n.d.-b).
In my work supporting psychosocial risk management across multinational employers, I observe this pattern consistently. Organizations invest in AI deployment and psychosocial risk compliance as parallel workstreams, with no integration between them. The AI team does not consult the OHS team. The psychosocial risk assessment does not mention AI. The prevention program does not address the psychosocial consequences of the technology the organization is simultaneously introducing.
That gap matters because the two workstreams are not parallel at all. They are deeply intertwined. AI adoption is reshaping the very conditions that psychosocial risk assessments are supposed to measure.
Conclusion
Quebec does not need to start from scratch. Law 27 and the broader occupational health and safety regime already provide a strong foundation for psychosocial risk prevention.
The next step is more precise policy guidance: when AI deployment should trigger psychosocial risk reassessment, how AI-related psychosocial hazards should be identified within the existing INSPQ/CNESST factor model, what worker consultation should look like during AI implementation, and how employers should document the preventive measures they put in place (Act respecting occupational health and safety, CQLR c S-2.1; CNESST, n.d.-a).
The challenge now is whether the institutions responsible for healthy work are ready to see AI clearly.
References
Act respecting occupational health and safety, CQLR c S-2.1. https://www.legisquebec.gouv.qc.ca/en/document/cs/s-2.1
Colquitt, J. A. (2001). On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology, 86(3), 386–400. https://doi.org/10.1037/0021-9010.86.3.386
Commission des normes, de l’équité, de la santé et de la sécurité du travail. (n.d.-a). Facteurs de risques psychosociaux liés au travail. Retrieved April 15, 2026, from https://www.cnesst.gouv.qc.ca/fr/prevention-securite/sante-psychologique/facteurs-risques-psychosociaux-lies-au-travail
Commission des normes, de l’équité, de la santé et de la sécurité du travail. (n.d.-b). Risques psychosociaux liés au travail. Retrieved April 15, 2026, from https://www.cnesst.gouv.qc.ca/fr/prevention-securite/sante-psychologique/risques-psychosociaux-lies-au-travail
Giuntella, O., König, J., & Stella, L. (2025). Artificial intelligence and the wellbeing of workers. Scientific Reports, 15(1), 20087. https://doi.org/10.1038/s41598-025-98241-3
Institut national de santé publique du Québec. (n.d.). Risques psychosociaux du travail et promotion de la santé des travailleurs et travailleuses. Retrieved April 15, 2026, from https://www.inspq.qc.ca/risques-psychosociaux-du-travail-et-promotion-de-la-sante-des-travailleurs
Karasek, R. A. (1979). Job demands, job decision latitude, and mental strain: Implications for job redesign. Administrative Science Quarterly, 24(2), 285–308. https://doi.org/10.2307/2392498
Kim, B.-J., & Lee, J. (2024). The mental health implications of artificial intelligence adoption: The crucial role of self-efficacy. Humanities and Social Sciences Communications, 11, 1561. https://doi.org/10.1057/s41599-024-04018-w
Loi modernisant le régime de santé et de sécurité du travail, LQ 2021, c 27.
Siegrist, J. (1996). Adverse health effects of high-effort/low-reward conditions. Journal of Occupational Health Psychology, 1(1), 27–41. https://doi.org/10.1037/1076-8998.1.1.27
Vézina, M., Cloutier, E., Stock, S., Lippel, K., Fortin, É., Delisle, A., St-Vincent, M., Funes, A., Duguay, P., Vézina, S., & Prud’homme, P. (2011). Enquête québécoise sur des conditions de travail, d’emploi et de santé et de sécurité du travail (EQCOTESST). Institut de recherche Robert-Sauvé en santé et en sécurité du travail. https://www.irsst.qc.ca/publications-et-outils/publication/i/100584