Trust Ecology Framework · Hor, 2026

Trust Ecology Framework · Interactive Sandbox

Trust does not break bilaterally.
It fractures ecologically.

"Trust in AI-augmented decisions often breaks down not because the technology fails or people are resistant, but because we have been applying dyadic trust models to a triadic, interconnected reality."


Dyadic diagnostics in a triadic world

Trust in AI-augmented decisions involves three interdependent dimensions — and the feedback loops between them. Existing tools measure one or two dimensions in isolation. None model their cross-dimensional coupling.

74%
of studies are dyadic
Trust modelled as bilateral — AI system or human agent, never the triadic ecology
0%
model triadic interdependence
Across 66 studies and 31,198 participants, none operationalise cross-dimensional feedback loops
r=.19
avg transparency effect
Explicability interventions show modest, inconsistent effects because the legitimacy pathway is missing

No existing tool accounts for all three dimensions simultaneously — or for the Interconnected Trust feedback loops that make a legitimacy fracture cascade retroactively into both AI credibility and human steward integrity. These cross-dimensional effects are operationalised in the TTS through exploratory items (20–25) that test whether the triad holds together empirically.


Six ways to explore the framework

Pick the one that fits where you are right now. Each path is self-contained.

Campbell Collaboration Review · 25-Item Trust Triad Scale · Sequential Explanatory Mixed-Methods · CFA + CB-SEM · REB Approved · #026-005

Three interdependent dimensions
and the ecology they form

Most frameworks treat trust in AI as a bilateral question — between a person and a system. The TEF begins from a different premise: that trust in AI-augmented decisions is always triadic, always processual, and always ecological.

Dimension I · Cognitive

AI Explicability (E)

Cognitive trust in AI reliability, transparency, and understandability. When explicability is low, professionals cannot interrogate AI reasoning — they oscillate between over-trust and under-trust, regardless of accuracy.

  • Accuracy and reliability of predictions
  • Transparency of model logic and reasoning
  • Black-box mitigation through visualisations
  • Confidence communication in uncertain scenarios
  • Reduction of competence doubt through explanation
Lee & See (2004) · Hoff & Bashir (2015) · 6 TTS items
Dimension II · Affective/Cognitive

Human Stewardship (S)

Relational trust in the integrity, benevolence, and competence of human agents who interpret, oversee, or override AI outputs. Even explicable AI becomes suspect when stewardship accountability falters.

  • Integrity in interpreting AI outputs
  • Stakeholder well-being prioritisation
  • Expertise in AI-augmented contexts
  • Benevolence buffering AI-related uncertainty
  • Empathetic communication of AI decisions
  • Integrity signals strengthening process trust
Mayer et al. (1995) · Mayer & Davis (1999) · 7 TTS items
Dimension III · Justice

Systemic Legitimacy (L)

Perceived fairness and institutional acceptability of AI-human decision outcomes. Functions as both product of trust and retroactive signal — unfair outcomes trigger attributional reappraisal across all prior interactions.

  • Equitable distribution of benefits and burdens
  • Procedural fairness in decision processes
  • Alignment with normative justice expectations
  • Respect in treatment of affected parties
  • Recourse mechanisms and appeal processes
Colquitt (2001) · Tyler (2006) · Suchman (1995) · 6 TTS items

The ecology they form

Three dimensions are not enough.
The joints between them are the framework.

The TEF integrates these three dimensions not as a checklist but as a living ecology — where fractures or repairs in one dimension cascade to the others. This is what the 74% of studies that remain dyadic miss entirely.

Ecological Connector

Interconnected Trust — the joints of the tripod

Interconnected Trust is not a fourth primary dimension of the TEF. It is the dynamic, processual property that emerges when the three dimensions function as an ecology — where fractures or repairs in one dimension cascade to others, creating feedback loops within the socio-algorithmic system (Möllering, 2001; Gillespie & Dietz, 2009).

Analogous to a tripod: the three legs are AI Explicability, Human Stewardship, and Systemic Legitimacy. Interconnected Trust is what the joints provide — without them, three strong legs still collapse.

Trust emerges as a system-wide dynamic from their ongoing feedback loops — not as the sum of independent parts. Low Legitimacy (unfair outcomes) can retroactively erode trust in both AI and human stewards. Strong stewardship can buffer unclear AI explanations.

The model embeds the triad in moderating concentric rings — organisational, sectoral, regulatory — and a processual spiral of iterative trust calibration and repair (Rousseau et al., 1998).

The core mechanism

When legitimacy fractures,
it reaches backward in time.

This is the TEF’s most non-obvious finding. A legitimacy fracture doesn’t just affect future trust — it causes professionals to retroactively revise their attributions of all prior interactions. Trust that felt solid becomes suspect.

The primary causal mechanism is attributional reappraisal (Gillespie & Dietz, 2009): when professionals perceive unfair outcomes (low Legitimacy), they retroactively revise attributions — eroding Explicability trust and Stewardship integrity, irrespective of prior performance evidence.

Transparency interventions addressing only Explicability show modest average effects (r = .19) because they cannot intercept this legitimacy-driven cascade.
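To make "modest" concrete: squaring a correlation gives the share of outcome variance an intervention explains. A minimal arithmetic sketch — illustrative only, not a reanalysis of the reviewed studies:

```python
# Share of trust-outcome variance explained by a correlation of r = .19
# (illustrative arithmetic only, not a reanalysis of the reviewed studies)
r_transparency = 0.19
variance_explained = r_transparency ** 2
print(f"{variance_explained:.1%}")  # roughly 3.6% of outcome variance
```

In other words, explicability-only interventions leave well over 90% of the variance in trust outcomes unaccounted for — consistent with the TEF's claim that the legitimacy and stewardship pathways carry the missing signal.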

Cross-Sectional · Items 20–25
Exploratory interdependence items — test whether trust functions systemically at a point in time. Six items produce cross-loading evidence for the TEF’s core triadic claim.
Temporal · Supplemental Module · 7 items
Retrospective Critical Incident Module — captures how trust eroded and recovered over time after a specific AI decision failure (Flanagan, 1954; Butterfield et al., 2005).
“In socio-algorithmic contexts, failures cascade: AI errors erode confidence in human oversight, unfair outcomes retroactively corrode both AI and steward credibility.” — Hor (2026) · TEF Chapter 2, drawing on Luhmann (1979) and Rousseau et al. (1998)

The evidence

Why dyadic tools are not enough

The meta-analytic record confirms what the TEF predicts: transparency interventions show modest, inconsistent effects because the legitimacy and stewardship pathways are absent from existing models.

Domain | Dominant (Dyadic) Finding | Emergent Triadic Insight | Effect
AI / Automation Trust | Transparency gains modest; resistance persists despite intervention | Outcomes moderate trust paths; feedback loops ignored | r ≈ 0.19
Human / Org. Trust | Benevolence buffers AI uncertainty | Stewardship compensates for explicability gaps | β ≈ 0.22
Justice / Outcomes | Justice predicts acceptance (~35%) | Inequity drives rejection; outcomes passive in dyadic models | 20–40%
Transparency | Effects inconsistent across contexts; visual > text | Fails without justice/recourse; high variability unexplained | β ≈ 0.14

Rate your context.
Reveal your Trust Fracture Profile.

What it measures
Your trust ecology across AI Explicability, Human Stewardship, and Systemic Legitimacy — plus cross-dimensional coupling.
How long
3–5 minutes. Rate each item as it applies to a specific AI-augmented decision in your professional context.
What you get
A live radar chart, your Trust Fracture Profile, and a downloadable branded PDF — ready to share or archive.
Want to go deeper?
After your profile, explore how this trust ecology developed — tracing erosion, attribution, and repair through a specific past incident.
Step 4 · Optional · Critical Incident

Think of a specific AI-augmented decision from your professional context. Rate each item 1–7 as you read. Your scores build a live Trust Fracture Profile across all three primary dimensions.

Rating anchors: 1 = Strongly Disagree · 4 = Neutral · 7 = Strongly Agree
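The live profile can be sketched as a simple scoring routine. The item-to-dimension mapping follows the TTS layout described on this page (items 1–6 Explicability, 7–13 Stewardship, 14–19 Legitimacy, 20–25 exploratory Interconnected Trust); the function name and the "weakest dimension" heuristic are illustrative assumptions, not the instrument's published scoring rules.

```python
def fracture_profile(ratings):
    """Summarise 25 TTS ratings (1-7) into dimension means.

    Item-dimension mapping per the TEF page: items 1-6 Explicability,
    7-13 Stewardship, 14-19 Legitimacy, 20-25 Interconnected Trust
    (exploratory). The 'fracture' heuristic below is illustrative only.
    """
    assert len(ratings) == 25 and all(1 <= r <= 7 for r in ratings)
    spans = {"Explicability": (0, 6), "Stewardship": (6, 13),
             "Legitimacy": (13, 19), "Interconnected": (19, 25)}
    means = {dim: sum(ratings[a:b]) / (b - a) for dim, (a, b) in spans.items()}
    # Hypothetical heuristic: the lowest-scoring primary dimension
    # is flagged as the likely fracture point.
    primary = {d: m for d, m in means.items() if d != "Interconnected"}
    fracture = min(primary, key=primary.get)
    return means, fracture

# Example: strong stewardship, weak legitimacy
means, fracture = fracture_profile([5]*6 + [6]*7 + [2]*6 + [4]*6)
print(fracture)  # Legitimacy
```

The dimension means would feed the radar chart directly; a production scorer would also need reverse-coded items and missing-data handling, which this sketch omits.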

Choose a context.
Calibrate the ecology.

Select a real-world scenario drawn from the TEF's target domains. Adjust the three manager-influenceable levers — each grounded in the TTS dimensions — and receive a live triadic diagnosis.

Calibrate levers
  • AI Explicability: Reliability · Transparency · Explanation quality
  • Human Stewardship: Integrity · Benevolence · Oversight accountability
  • Systemic Legitimacy: Procedural fairness · Distributive equity · Recourse
The three levers map directly to the TTS dimensions. As you calibrate, the fracture pattern and recommended intervention update in real time.
Ecological Coupling
Interconnected Trust · Ecological Coupling Signal
Derived from pillar balance — not a scored lever. High variance between E/S/L suppresses cross-dimensional coupling.
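The coupling signal described above can be sketched as a balance measure over the three lever values. The sandbox's actual formula is not published here; this version, which penalises variance between E, S, and L, is an assumption consistent with the description:

```python
import statistics

def coupling_signal(e, s, l):
    """Illustrative Interconnected Trust coupling signal (0-100).

    Assumption: coupling is high when the three pillars (0-100 each)
    are balanced and suppressed when their spread is high, per the
    description 'high variance between E/S/L suppresses coupling'.
    """
    spread = statistics.pstdev([e, s, l])  # 0 when perfectly balanced
    balance = max(0.0, 1 - spread / 50)    # normalised spread penalty
    level = sum([e, s, l]) / 3 / 100       # overall trust level, 0..1
    return round(100 * balance * level)

print(coupling_signal(35, 52, 22))  # suppressed: unbalanced, low legitimacy
print(coupling_signal(70, 70, 70))  # 70 — balanced pillars couple fully
```

Under this sketch an unbalanced profile scores well below a balanced one at the same mean, which is the behaviour the lever panel describes.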

Describe your trust challenge.

Apply the Trust Ecology Framework to your real-world context — one-shot structured diagnosis or an iterative dialogue that lets you explore cascade effects and intervention sequencing.

I · AI Explicability
Cognitive trust in AI reliability, transparency, and understandability. Can professionals interrogate the reasoning?
II · Human Stewardship
Relational trust in oversight integrity and benevolence. Are human agents accountable and trustworthy?
III · Systemic Legitimacy
Perceived outcome fairness — both product of trust and retroactive signal reshaping it across all nodes.

Upload or paste an organisational document — policy letter, claim decision, HR communication. DocuPRO reads the trust signals embedded in the language and maps them across all three TEF dimensions.

Coming in next release
⚖️ Regulatory Flag Layer
Compliance signal detection by jurisdiction — MAS, FCA, APRA. Flags language worth reviewing with legal counsel.
📊 Benchmark Comparison
How this document scores against industry baseline — contextualising fracture risk against sector norms.
🗂️ Relationship Portfolio
Multi-document trust trajectory over time — map the full arc of a customer or employee relationship.
📝 Remediation Script Library
Saved and exportable communication templates — industry-calibrated repair scripts drawn from Rousseau Ch.6.
TEF DocuPRO · Hor (2026) · Sobey School of Business, Saint Mary's University

How the Trust Ecology Framework was built

For researchers — this section covers the empirical design in detail

A sequential explanatory mixed-methods programme — four phases, each informing the next. Grounded in processual realism and critical pragmatism, the design treats trust as emergent, relational, and never static.

Phase 0 · Campbell Collaboration Protocol

  • 66 empirical studies · N = 31,198 · healthcare, finance, hiring, criminal justice
  • 74% remain dyadic — AI or human trust, not both
  • 0% model triadic interdependence empirically
  • Transparency effect sizes inconsistent (r = .14–.19); legitimacy pathway largely absent
Protocol registered · Campbell Collaboration · Under review
Phase 1 · Trust Triad Scale (TTS) · 25 Items

  • Expert Delphi panel · 10–15 members · AI trust, OB, applied ethics
  • Cognitive interviews · 10–15 participants · 40–45 min · S-CVI/Ave ≥ 0.90
  • 6 items: AI Explicability · 7 items: Human Stewardship · 6 items: Systemic Legitimacy
  • 6 items: Interconnected Trust (exploratory · items 20–25)
~30 items pre-pilot · target 20–25 post-CFA · 7-point Likert
Phase 2 · Two-Wave Survey · CFA & SEM

  • N ≈ 450 at Wave 1 · ≈315–340 complete cases at Wave 2 (70–75% retention)
  • Wave 1: TTS, dyadic benchmarks, vignette, CIT recall · Wave 2: 4–6 weeks later
  • CFA → CB-SEM via lavaan · >90% power · small-to-medium effects
  • Lagged design yields temporal precedence evidence for trust dynamics
Qualtrics · online panel · REB #026-005
Phase 3 · Semi-Structured Interviews

  • N = 20–25 professionals · 40–45 min · Zoom recorded with consent
  • Trust fracture narratives · attributional accounts · retrospective CIT
  • Reflexive Thematic Analysis · Braun & Clarke (2022)
  • Joint displays integrate with Phase 2 quantitative patterns
Healthcare · Finance · Insurance · Hiring · Criminal Justice

Four theoretical lineages, one framework

I · AI Explicability
Lee & See (2004)
Automation trust calibration — reliability, transparency, and performance-based cognitive trust
TTS items 1–6
Hoff & Bashir (2015)
Three-layer trust model — dispositional, situational, and learned trust in automation contexts
Dynamic Explicability scoring
II · Human Stewardship
Mayer et al. (1995)
Ability, benevolence, integrity as the triadic foundations of interpersonal organisational trust
TTS items 7–13
Mayer & Davis (1999)
Benevolence as buffer under uncertainty — stewardship scoring under AI override conditions
Trust measurement validation
III · Systemic Legitimacy
Colquitt (2001)
Four-factor organisational justice — distributive, procedural, interpersonal, informational
TTS items 14–19
Tyler (2006)
Procedural justice as institutional legitimacy driver — legitimacy as systemic, not individual
Voluntary compliance mechanism
Core Mechanism
Gillespie & Dietz (2009)
Attributional reappraisal after trust violation — retroactive fracture cascades across all three nodes
Ontological Foundation
Möllering (2001, 2006)
Processual trust as suspension and becoming — trust is never static, always in construction
Scope & Synthesis
Rousseau et al. (1998)
Interdisciplinary synthesis — justifies multi-domain applicability across all TEF contexts

Four questions no dyadic tool can answer

RQ1 · Systematic Review

To what extent do existing empirical studies operationalise trust in AI-augmented decisions as triadic and interdependent — rather than dyadic?

RQ2 · Scale Development

Does the TTS demonstrate adequate reliability, validity, and construct coverage across the three TEF dimensions and the Interconnected Trust dimension?

RQ3 · SEM Study

How do the three primary dimensions interact to predict trust outcomes — and does Interconnected Trust mediate cross-dimensional feedback loops?

RQ4 · Qualitative Study

How do professionals narrate trust fracture and repair — and do these accounts reflect the attributional reappraisal mechanism predicted by the TEF?


Delimitations

  • High-stakes AI-augmented decisions only — not routine automation
  • Professional decision-makers as primary unit of analysis
  • Two-wave design — longer longitudinal dynamics are future work
  • English-language systematic review — multilingual evidence a limitation
  • Four domains: healthcare · finance/insurance · hiring · criminal justice

Ethics & Positionality

  • REB Approved · Saint Mary's University · Protocol #026-005
  • Supervisory committee: Zhang · Wang · Carroll · Rousseau (external)
  • Practitioner-scholar positionality declared · 20+ years financial services
  • Data: anonymised · aggregate reporting only · no individual identification
  • Sandbox: research instrument only — not clinical or actuarial
The Trust Ecology Framework emerged not from a single insight, but from the convergence of methodological rigour, theoretical depth, and practical grounding — shaped through an iterative process of development, supervisory guidance, and scholarly dialogue.

This section introduces the research architecture that will test the framework, the supervisory team guiding its development, and the doctoral candidate who brought these diverse streams into a unified, testable, and accessible toolkit.
Co-Supervisor
Dr. Michael Zhang
Associate Professor, Department of Finance, Information Systems and Management Science
Sobey School of Business, Saint Mary's University
Director, Master of Business Analytics Program
PhD, Ivey Business School, Western University
Michael Zhang's research centres on data analytics in healthcare services and supply chain management. He leads major grants including a $273,000 NFRFE project on machine learning for youth mental health and a $46,000 SSHRC IDG on vaccination strategies using analytics. He teaches applied data analysis, statistics, operations management, and quantitative methods across PhD, master's, and undergraduate programs.

TEF Contribution — His expertise in analytics and organizational decision-making provided foundational scaffolding for the TEF's empirical backbone, particularly in positioning the framework to speak to both academic rigour and practitioner relevance in high-stakes AI-augmented contexts.

Co-Supervisor
Dr. Yinglei Wang
Professor of Management Information Systems
F.C. Manning School of Business, Acadia University
PhD, Ivey Business School, Western University
Yinglei Wang's research adopts a behavioural lens to explore how individuals adapt to and leverage contemporary technologies in organizations, with publications in MIS Quarterly, Journal of Management Information Systems, and Information Systems Journal. He has secured SSHRC funding for digital skills in virtual settings and received the Harrison McCain Emerging Scholar Award (2011) and Faculty Research Excellence Award (2012).

TEF Contribution — His methodological rigour shaped the TEF's empirical design — from the sequential explanatory mixed-methods architecture to the planned CFA, SEM, and LPA analyses — with a strong emphasis on transparency, reproducibility, and behavioural validity.

Program Director & Committee Member
Dr. Wendy R. Carroll
Program Director, Executive DBA (EDBA)
Associate Professor, Department of Management
Sobey School of Business, Saint Mary's University
PhD in Management, Sobey School of Business, Saint Mary's University
Wendy Carroll is an award-winning educator and practice-oriented researcher with 20 years of senior leadership experience in national and multinational organizations before entering academia. Her work focuses on workforce strategies, human resource management, employee silence, and evidence-based decision-making. She received the Dr. Geraldine Thomas Education Leadership Award (2018) and was named one of Canada's Top HR Professionals (2016) by Canadian HR Reporter.

TEF Contribution — Her practitioner-scholar perspective ensured the TEF remains grounded in real-world professional decision-making. She bridged abstract theory with practical application, directly influencing the sandbox's design as a usable diagnostic tool for organizational leaders.

External Committee Member
Dr. Denise M. Rousseau
H.J. Heinz II University Professor of Organizational Behavior and Public Policy
Heinz College & Tepper School of Business, Carnegie Mellon University
PhD in Psychology, University of California, Berkeley
Denise Rousseau is a leading scholar in organizational behaviour, renowned for developing psychological contract theory. She is Chair of Health Care Policy and Management at CMU, Academic Board President of the Center for Evidence-Based Management, and Co-Chair of the Campbell Library's Management and Business Coordinating Group. She is the author of more than a dozen books and 220+ articles, and past President of the Academy of Management (2004–2005). Honors include two George Terry Awards, Lifetime Achievement from SIOP, and fellowships in SIOP, APA, AOM, BAM, and the Academy of Social Sciences.

TEF Contribution — Her pioneering work on psychological contracts and evidence-based management provided critical theoretical scaffolding for the TEF, particularly in conceptualizing trust as relational, emergent, and ecologically interdependent.

Doctoral Candidate
Rachel Hor
Architect & Builder
Executive Doctor of Business Administration (EDBA) Candidate
Sobey School of Business, Saint Mary's University
Rachel Hor is an Executive DBA candidate at the Sobey School of Business, Saint Mary's University. Her doctoral research integrates behavioural science, information systems theory, and organizational justice into a unified framework for AI-augmented decision-making across high-stakes organizational contexts.

She brings over two decades of technology leadership across financial services — with deep specialization in insurance — spanning North America, Asia Pacific, and EMEA. Her practitioner background in enterprise transformation, digital platforms, and human-AI system design grounds the TEF in the operational contexts where trust fractures most consequentially: where automated decisions meet human accountability.

The Trust Triad Scale (TTS) instrument is under active development, with a Campbell Collaboration systematic review title registered and protocol under review.
TEF Contribution — As architect of the Trust Ecology Framework and builder of this sandbox, her work operationalizes the committee's scholarly guidance into a diagnostic toolkit designed for researchers and practitioners alike.

Processual Realism

Trust is not a static property — it is emergent, relational, and continually reconstructed. Möllering's (2001) suspension model: trust requires a leap that cannot be reduced to calculation or affect alone.

Critical Pragmatism

Knowledge must be actionable for practitioners, not merely internally consistent. Quantitative SEM establishes structure; qualitative narrative inquiry reveals meaning-making that numbers cannot capture.

Sequential-Convergent

Each phase informs the next — systematic gaps → scale items → SEM hypotheses → narrative interpretation — while standing independently. Integration point: the Trust Fracture Profile.

“A processual approach does not ask ‘how much trust exists?’ but ‘how is trust being constructed, maintained, and fractured across the triadic ecology at this moment?’” Hor (2026) · Trust Ecology Dissertation Proposal V14