Trust does not break bilaterally. It fractures ecologically.
"Trust in AI-augmented decisions often breaks down not because the technology fails or people are resistant, but because we have been applying dyadic trust models to a triadic, interconnected reality."
74%
Studies remain dyadic
66
Studies reviewed
31,198
Participants
r=.19
Transparency effect
The Problem
Dyadic diagnostics in a triadic world
Trust in AI-augmented decisions involves three interdependent dimensions — and the feedback loops between them. Existing tools measure one or two dimensions in isolation. None model their cross-dimensional coupling.
74%
of studies are dyadic
Trust modelled as bilateral — AI system or human agent, never the triadic ecology
0%
model triadic interdependence
Across 66 studies and 31,198 participants, none operationalise cross-dimensional feedback loops
r=.19
avg transparency effect
Explicability interventions show modest, inconsistent effects because the legitimacy pathway is missing
No existing tool accounts for all three dimensions simultaneously — or for the Interconnected Trust feedback loops that make a legitimacy fracture cascade retroactively into both AI credibility and human steward integrity. These cross-dimensional effects are operationalised in the TTS through exploratory items (20–25) that test whether the triad holds together empirically.
The Framework
What the Sandbox Offers
Six ways to explore the framework
Pick the one that fits where you are right now. Each path is self-contained.
Three interdependent dimensions and the ecology they form
Most frameworks treat trust in AI as a bilateral question — between a person and a system. The TEF begins from a different premise: that trust in AI-augmented decisions is always triadic, always processual, and always ecological.
I · AI Explicability
II · Human Stewardship
III · Systemic Legitimacy
+ Interconnected Trust
Dimension I · Cognitive
AI Explicability (E)
Cognitive trust in AI reliability, transparency, and understandability. When explicability is low, professionals cannot interrogate AI reasoning — they oscillate between over-trust and under-trust, regardless of accuracy.
Accuracy and reliability of predictions
Transparency of model logic and reasoning
Black-box mitigation through visualisations
Confidence communication in uncertain scenarios
Reduction of competence doubt through explanation
Lee & See (2004) · Hoff & Bashir (2015) · 6 TTS items
Dimension II · Affective/Cognitive
Human Stewardship (S)
Relational trust in the integrity, benevolence, and competence of human agents who interpret, oversee, or override AI outputs. Even explicable AI becomes suspect when stewardship accountability falters.
Integrity in interpreting AI outputs
Stakeholder well-being prioritisation
Expertise in AI-augmented contexts
Benevolence buffering AI-related uncertainty
Empathetic communication of AI decisions
Integrity signals strengthening process trust
Mayer et al. (1995) · Mayer & Davis (1999) · 7 TTS items
Dimension III · Justice
Systemic Legitimacy (L)
Perceived fairness and institutional acceptability of AI-human decision outcomes. Functions as both product of trust and retroactive signal — unfair outcomes trigger attributional reappraisal across all prior interactions.
Colquitt (2001) · Tyler (2006) · 6 TTS items
Three dimensions are not enough. The joints between them are the framework.
The TEF integrates these three dimensions not as a checklist but as a living ecology — where fractures or repairs in one dimension cascade to the others. This is what 74% of existing frameworks miss entirely.
Ecological Connector
Interconnected Trust — the joints of the tripod
Interconnected Trust is not a fourth primary dimension of the TEF. It is the dynamic, processual property that emerges when the three dimensions function as an ecology — where fractures or repairs in one dimension cascade to others, creating feedback loops within the socio-algorithmic system (Möllering, 2001; Gillespie & Dietz, 2009).
Analogous to a tripod: the three legs are AI Explicability, Human Stewardship, and Systemic Legitimacy. Interconnected Trust is what the joints provide — without them, three strong legs still collapse.
Trust emerges as a system-wide dynamic from their ongoing feedback loops — not as the sum of independent parts. Low Legitimacy (unfair outcomes) can retroactively erode trust in both AI and human stewards. Strong stewardship can buffer unclear AI explanations.
The model embeds the triad in moderating concentric rings — organisational, sectoral, regulatory — and a processual spiral of iterative trust calibration and repair (Rousseau et al., 1998).
The core mechanism
When legitimacy fractures, it reaches backward in time.
This is the TEF’s most non-obvious finding. A legitimacy fracture doesn’t just affect future trust — it causes professionals to retroactively revise their attributions of all prior interactions. Trust that felt solid becomes suspect.
The primary causal mechanism is attributional reappraisal (Gillespie & Dietz, 2009): when professionals perceive unfair outcomes (low Legitimacy), they retroactively revise attributions — eroding Explicability trust and Stewardship integrity, irrespective of prior performance evidence.
Transparency interventions addressing only Explicability show modest average effects (r = .19) because they cannot intercept this legitimacy-driven cascade.
Cross-Sectional · Items 20–25
Exploratory interdependence items — test whether trust functions systemically at a point in time. Six items produce cross-loading evidence for the TEF’s core triadic claim.
Temporal · Supplemental Module · 7 items
Retrospective Critical Incident Module — captures how trust eroded and recovered over time after a specific AI decision failure (Flanagan, 1954; Butterfield et al., 2005).
“In socio-algorithmic contexts, failures cascade: AI errors erode confidence in human oversight, unfair outcomes retroactively corrode both AI and steward credibility.”
— Hor (2026) · TEF Chapter 2, drawing on Luhmann (1979) and Rousseau et al. (1998)
The evidence
Why dyadic tools are not enough
The meta-analytic record confirms what the TEF predicts: transparency interventions show modest, inconsistent effects because the legitimacy and stewardship pathways are absent from existing models.
Transparency · Effects inconsistent across contexts; visual > text · Fails without justice/recourse; high variability unexplained · β ≈ 0.14
Operationalising Interdependence · From Theory to Diagnostic Tool
How the ecology is measured
The TEF is operationalised through the TTS, which integrates validated constructs from automation trust, organisational trust, and organisational justice, augmented by exploratory Interconnected Trust items that explicitly capture cross-dimensional feedback loops.
Primary · Validated Dimensions
Three dimensions adapt items from established, validated scales — AI Explicability (6), Human Stewardship (7), Systemic Legitimacy (6) — total 19 core items.
Dims I–III · 19 items · Validated origins
CFA · SEM · Cronbach α · McDonald ω
Exploratory · Interdependence Items
Six exploratory items (20–25) test cross-dimensional coupling. Not a fourth dimension — they probe cross-loadings among the primary three. Discriminant validity criterion: HTMT ≤ 0.85 relative to the primary dimensions.
Items 20–25 · Exploratory · Subject to CFA refinement
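The HTMT criterion above can be sketched numerically. A minimal illustration, assuming a simple item-correlation lookup rather than the full sample correlation matrix an actual CFA would use:

```python
import itertools
import statistics

def htmt(corr, items_a, items_b):
    """Heterotrait-monotrait ratio of correlations between two constructs,
    given an item-correlation lookup corr[(i, j)] with i < j.

    HTMT = mean |correlation| between items of different constructs,
    divided by the geometric mean of the average |correlation| among
    items within each construct. Values above ~0.85 suggest the two
    constructs are not empirically distinct.
    """
    # Heterotrait: items of construct A paired with items of construct B.
    hetero = statistics.mean(abs(corr[(i, j)]) for i in items_a for j in items_b)
    # Monotrait: item pairs within each construct.
    mono_a = statistics.mean(abs(corr[(i, j)]) for i, j in itertools.combinations(items_a, 2))
    mono_b = statistics.mean(abs(corr[(i, j)]) for i, j in itertools.combinations(items_b, 2))
    return hetero / (mono_a * mono_b) ** 0.5
```

For example, if items within each construct intercorrelate at 0.8 while cross-construct correlations sit at 0.4, HTMT = 0.4 / 0.8 = 0.5 — comfortably under the 0.85 threshold.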
Example Exploratory Items · Cross-Loading Paths
E ↔ S
“When the AI makes an error, my trust in the human decision-maker also decreases.”
Tests whether an Explicability failure automatically contaminates Stewardship trust.
S ↔ L
“When the decision-maker acts with integrity, I am more willing to accept outcomes I find unfair.”
Tests whether strong Stewardship can buffer weak Legitimacy.
L ↔ E
“Unfair outcomes make me question whether the AI was biased, even if technically accurate.”
Tests the retroactive reappraisal pathway — the TEF’s core non-obvious prediction.
E+S+L
“When both the AI and human agree but the outcome is unjust, I lose faith in the entire system.”
The full triadic cascade item — expected to load on all three primary factors.
Trust Ecology (Triad) Scale · TTS
Rate your context. Reveal your Trust Fracture Profile.
What it measures
Your trust ecology across AI Explicability, Human Stewardship, and Systemic Legitimacy — plus cross-dimensional coupling.
How long
3–5 minutes. Rate each item as it applies to a specific AI-augmented decision in your professional context.
What you get
A live radar chart, your Trust Fracture Profile, and a downloadable branded PDF — ready to share or archive.
Want to go deeper?
After your profile, explore how this trust ecology developed — tracing erosion, attribution, and repair through a specific past incident.
Step 4 · Optional · Critical Incident
Think of a specific AI-augmented decision from your professional context. Rate each item 1–7 as you read. Your scores build a live Trust Fracture Profile across all three primary dimensions.
These 6 exploratory items (20–25) test cross-dimensional interdependence, not a standalone fourth dimension. Scores indicate whether trust fractures in one dimension appear to cascade to others in your context. Subject to refinement post-pilot/CFA.
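The roll-up from item ratings to the radar chart might look like the following sketch. The item-to-dimension mapping (E: items 1–6, S: 7–13, L: 14–19, Interconnected: 20–25) comes from the instrument description; the mean-and-rescale scoring rule is an assumption, not the validated procedure:

```python
def tts_dimension_scores(responses: list[int]) -> dict[str, float]:
    """Hypothetical roll-up of the 25 TTS item responses (1-7 Likert)
    into 0-100 dimension scores for the radar chart.
    """
    if len(responses) != 25 or any(not 1 <= r <= 7 for r in responses):
        raise ValueError("expected 25 responses, each rated 1-7")
    blocks = {
        "AI Explicability": responses[0:6],      # items 1-6
        "Human Stewardship": responses[6:13],    # items 7-13
        "Systemic Legitimacy": responses[13:19], # items 14-19
        "Interconnected Trust": responses[19:25] # items 20-25
    }
    # Rescale each dimension mean from the 1-7 range onto 0-100.
    return {name: round((sum(items) / len(items) - 1) / 6 * 100, 1)
            for name, items in blocks.items()}
```

A respondent rating every item 4 (the scale midpoint) would score 50.0 on all four dimensions under this rule.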
Fracture Pattern
Note: This self-assessment is for exploratory and research feedback purposes only. Scores are pre-psychometric validation. For a full AI-powered diagnosis of your trust challenge, use the AI Diagnosis tab.
Optional · Step 4
Your Trust Fracture Profile captures where you are now. The Critical Incident module explores how you got here — tracing the processual arc of trust erosion and repair over time.
Explore the temporal dimension
Think of a time in the past 6 months when an AI-supported decision you were involved with produced an outcome you considered unfair or problematic.
CIT-1 — How long did it take for your trust in the AI system to recover?
CIT-2 — How long did it take for your trust in the responsible decision-maker to recover?
CIT-3 — Did the unfair outcome make you doubt the AI’s reliability, even though you had trusted it before?
Strongly Disagree · Strongly Agree
CIT-4 — Did the unfair outcome make you doubt the decision-maker’s integrity, even though you had trusted them before?
Strongly Disagree · Strongly Agree
CIT-5 — After this incident, did you become more or less likely to accept AI recommendations in similar situations?
Much less likely · Much more likely
CIT-6 — What helped restore your trust? (Select all that apply)
CIT-7 — Briefly describe any other details about how trust was affected or repaired. (Optional)
This supplemental module is for research feedback purposes only. Responses are not transmitted. In the full study, this section will be validated through response rates, correlations with TTS scores (expected r ≈ 0.30–0.50), and qualitative thematic analysis.
Scenario Laboratory
Choose a context. Calibrate the ecology.
Select a real-world scenario drawn from the TEF's target domains. Adjust the three manager-influenceable levers — each grounded in the TTS dimensions — and receive a live triadic diagnosis.
Your Scenario
Adjust the three levers below to reflect your context. Use the AI Diagnosis tab for a full triadic analysis.
Calibrate levers
AI Explicability
Reliability · Transparency · Explanation quality
35
Human Stewardship
Integrity · Benevolence · Oversight accountability
The three levers map directly to the TTS dimensions. As you calibrate, the fracture pattern and recommended intervention update in real time.
AI Explicability
35
Human Stewardship
52
Systemic Legitimacy
22
Ecological Coupling
—
Interconnected Trust · Ecological Coupling Signal
Derived from pillar balance — not a scored lever. High variance between E/S/L suppresses cross-dimensional coupling.
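As an illustration of the balance-to-coupling idea (not the TEF's actual formula), one could map the spread of the three lever scores onto a 0–100 signal:

```python
import statistics

def coupling_signal(e: float, s: float, l: float) -> float:
    """Illustrative ecological-coupling signal derived from pillar balance.

    Assumes the three lever scores (AI Explicability, Human Stewardship,
    Systemic Legitimacy) sit on a 0-100 scale. The page states only that
    high variance between E/S/L suppresses coupling, so this sketch maps
    the standard deviation of the pillars onto 0-100: perfectly balanced
    pillars -> 100, maximally imbalanced pillars -> 0.
    """
    sd = statistics.pstdev([e, s, l])
    max_sd = statistics.pstdev([0, 0, 100])  # worst-case imbalance for this range
    return max(0.0, 100.0 * (1 - sd / max_sd))
```

Under this rule, levers at 50/50/50 yield a full-strength coupling signal, while 0/0/100 collapses it to zero.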
► Primary Intervention (TEF)
Set levers to generate intervention guidance.
Does this diagnosis reflect your experience or understanding of this scenario?
✓ Thank you — your feedback informs the research.
Export this diagnosis as a formatted report
AI-Powered Triadic Diagnosis
Describe your trust challenge.
Apply the Trust Ecology Framework to your real-world context — one-shot structured diagnosis or an iterative dialogue that lets you explore cascade effects and intervention sequencing.
I · AI Explicability
Cognitive trust in AI reliability, transparency, and understandability. Can professionals interrogate the reasoning?
II · Human Stewardship
Relational trust in oversight integrity and benevolence. Are human agents accountable and trustworthy?
III · Systemic Legitimacy
Perceived outcome fairness — both product of trust and retroactive signal reshaping it across all nodes.
Export this AI diagnosis as a formatted report
The Trust Ecologist has detected your TTS dimension profile — include it as context?
Upload or paste an organisational document — policy letter, claim decision, HR communication. DocuPRO reads the trust signals embedded in the language and maps them across all three TEF dimensions.
📄
Drop PDF, DOCX, or TXT — or click to browse
Compare two versions of a document — earlier and later — to detect unacknowledged contract drift. Based on Rousseau Ch.6: drift that is unnamed becomes violation.
Document A · Earlier
📄 Upload PDF/DOCX/TXT
Document B · Later
📄 Upload PDF/DOCX/TXT
Coming in next release
⚖️ Regulatory Flag Layer
Compliance signal detection by jurisdiction — MAS, FCA, APRA. Flags language worth reviewing with legal counsel.
📊 Benchmark Comparison
How this document scores against industry baseline — contextualising fracture risk against sector norms.
🗂️ Relationship Portfolio
Multi-document trust trajectory over time — map the full arc of a customer or employee relationship.
📝 Remediation Script Library
Saved and exportable communication templates — industry-calibrated repair scripts drawn from Rousseau Ch.6.
TEF DocuPRO · Hor (2026) · Sobey School of Business, Saint Mary's University
Doctoral Dissertation · Sobey School of Business · Saint Mary's University
How the Trust Ecology Framework was built
For researchers — this section covers the empirical design in detail
A sequential explanatory mixed-methods programme — four phases, each informing the next. Grounded in processual realism and critical pragmatism, the design treats trust as emergent, relational, and never static.
Tyler (2006)
Procedural justice as institutional legitimacy driver — legitimacy as systemic, not individual
Voluntary compliance mechanism
Core Mechanism
Gillespie & Dietz (2009)
Attributional reappraisal after trust violation — retroactive fracture cascades across all three nodes
Ontological Foundation
Möllering (2001, 2006)
Processual trust as suspension and becoming — trust is never static, always in construction
Scope & Synthesis
Rousseau et al. (1998)
Interdisciplinary synthesis — justifies multi-domain applicability across all TEF contexts
Research Questions
Four questions no dyadic tool can answer
RQ1 · Systematic Review
To what extent do existing empirical studies operationalise trust in AI-augmented decisions as triadic and interdependent — rather than dyadic?
RQ2 · Scale Development
Does the TTS demonstrate adequate reliability, validity, and construct coverage across the three TEF dimensions and the Interconnected Trust dimension?
RQ3 · SEM Study
How do the three primary dimensions interact to predict trust outcomes — and does Interconnected Trust mediate cross-dimensional feedback loops?
RQ4 · Qualitative Study
How do professionals narrate trust fracture and repair — and do these accounts reflect the attributional reappraisal mechanism predicted by the TEF?
Scope & Ethics
Delimitations
High-stakes AI-augmented decisions only — not routine automation
Professional decision-makers as primary unit of analysis
Two-wave design — longer longitudinal dynamics are future work
English-language systematic review — multilingual evidence a limitation
Four domains: healthcare · finance/insurance · hiring · criminal justice
Ethics & Positionality
REB Approved · Saint Mary's University · Protocol #026-005
Supervisory committee: Zhang · Wang · Carroll · Rousseau (external)
Practitioner-scholar positionality declared · 20+ years financial services
Data: anonymised · aggregate reporting only · no individual identification
Sandbox: research instrument only — not clinical or actuarial
Research Design, Supervisory Team & the Making of the Trust Ecology Framework
The Trust Ecology Framework emerged not from a single insight, but from the convergence of methodological rigour, theoretical depth, and practical grounding — shaped through an iterative process of development, supervisory guidance, and scholarly dialogue.
This section introduces the research architecture that will test the framework, the supervisory team guiding its development, and the doctoral candidate who brought these diverse streams into a unified, testable, and accessible toolkit.
Supervisory Committee
Co-Supervisor
Dr. Michael Zhang
Associate Professor, Department of Finance, Information Systems and Management Science
Sobey School of Business, Saint Mary's University
Director, Master of Business Analytics Program
PhD, Ivey Business School, Western University
Research in data analytics, healthcare services, and supply chain management. Leads major NFRFE and SSHRC grants including a $273,000 project on machine learning for youth mental health.
Michael Zhang's research centres on data analytics in healthcare services and supply chain management. He leads major grants including a $273,000 NFRFE project on machine learning for youth mental health and a $46,000 SSHRC IDG on vaccination strategies using analytics. He teaches applied data analysis, statistics, operations management, and quantitative methods across PhD, master's, and undergraduate programs.
TEF Contribution — His expertise in analytics and organizational decision-making provided foundational scaffolding for the TEF's empirical backbone, particularly in positioning the framework to speak to both academic rigour and practitioner relevance in high-stakes AI-augmented contexts.
Co-Supervisor
Dr. Yinglei Wang
Professor of Management Information Systems
F.C. Manning School of Business, Acadia University
PhD, Ivey Business School, Western University
Behavioural research on technology adoption in organisations. Publications in MIS Quarterly, JMIS, and ISJ. Harrison McCain Emerging Scholar Award (2011).
Yinglei Wang's research adopts a behavioural lens to explore how individuals adapt to and leverage contemporary technologies in organizations, with publications in MIS Quarterly, Journal of Management Information Systems, and Information Systems Journal. He has secured SSHRC funding for digital skills in virtual settings and received the Harrison McCain Emerging Scholar Award (2011) and Faculty Research Excellence Award (2012).
TEF Contribution — His methodological rigour shaped the TEF's empirical design — from the sequential explanatory mixed-methods architecture to the planned CFA, SEM, and LPA analyses — with a strong emphasis on transparency, reproducibility, and behavioural validity.
Program Director & Committee Member
Dr. Wendy R. Carroll
Program Director, Executive DBA (EDBA)
Associate Professor, Department of Management
Sobey School of Business, Saint Mary's University
PhD in Management, Sobey School of Business, Saint Mary's University
Award-winning educator with 20 years of senior leadership experience before academia. Research in workforce strategies, HR management, and evidence-based decision-making.
Wendy Carroll is an award-winning educator and practice-oriented researcher with 20 years of senior leadership experience in national and multinational organizations before entering academia. Her work focuses on workforce strategies, human resource management, employee silence, and evidence-based decision-making. She received the Dr. Geraldine Thomas Education Leadership Award (2018) and was named one of Canada's Top HR Professionals (2016) by Canadian HR Reporter.
TEF Contribution — Her practitioner-scholar perspective ensured the TEF remains grounded in real-world professional decision-making. She bridged abstract theory with practical application, directly influencing the sandbox's design as a usable diagnostic tool for organizational leaders.
External Committee Member
Dr. Denise M. Rousseau
H.J. Heinz II University Professor of Organizational Behavior and Public Policy
Heinz College & Tepper School of Business, Carnegie Mellon University
PhD in Psychology, University of California, Berkeley
Pioneer of psychological contract theory. Past President of the Academy of Management. 220+ articles, 12+ books. Lifetime Achievement from SIOP.
Denise Rousseau is a leading scholar in organizational behaviour, renowned for developing psychological contract theory. She is Chair of Health Care Policy and Management at CMU, Academic Board President of the Center for Evidence-Based Management, and Co-Chair of the Campbell Library's Management and Business Coordinating Group. Author of over a dozen books and 220+ articles and past President of the Academy of Management (2004–2005). Honors include two George Terry Awards, Lifetime Achievement from SIOP, and fellowships in SIOP, APA, AOM, BAM, and the Academy of Social Sciences.
TEF Contribution — Her pioneering work on psychological contracts and evidence-based management provided critical theoretical scaffolding for the TEF, particularly in conceptualizing trust as relational, emergent, and ecologically interdependent.
Doctoral Candidate · Architect & Builder
Doctoral Candidate
Rachel Hor
Architect & Builder
Executive Doctor of Business Administration (EDBA) Candidate
Sobey School of Business, Saint Mary's University
Rachel Hor is an Executive DBA candidate at the Sobey School of Business, Saint Mary's University. Her doctoral research integrates behavioural science, information systems theory, and organizational justice into a unified framework for AI-augmented decision-making across high-stakes organizational contexts.
She brings over two decades of technology leadership across financial services — with deep specialization in insurance — spanning North America, Asia Pacific, and EMEA. Her practitioner background in enterprise transformation, digital platforms, and human-AI system design grounds the TEF in the operational contexts where trust fractures most consequentially: where automated decisions meet human accountability.
The Trust Triad Scale (TTS) instrument is under active development, with a Campbell Collaboration systematic review title registered and protocol under review.
TEF Contribution — As architect of the Trust Ecology Framework and builder of this sandbox, her work operationalizes the committee's scholarly guidance into a diagnostic toolkit designed for researchers and practitioners alike.
Philosophical Grounding
◆
Ontology
Processual Realism
Trust is not a static property — it is emergent, relational, and continually reconstructed. Möllering's (2001) suspension model: trust requires a leap that cannot be reduced to calculation or affect alone.
◆
Epistemology
Critical Pragmatism
Knowledge must be actionable for practitioners, not merely internally consistent. Quantitative SEM establishes structure; qualitative narrative inquiry reveals meaning-making that numbers cannot capture.
◆
Methodology
Sequential-Convergent
Each phase informs the next — systematic gaps → scale items → SEM hypotheses → narrative interpretation — while standing independently. Integration point: the Trust Fracture Profile.
“A processual approach does not ask ‘how much trust exists?’ but ‘how is trust being constructed, maintained, and fractured across the triadic ecology at this moment?’”
Hor (2026) · Trust Ecology Dissertation Proposal V14
This page reflects the collaborative foundation of the TEF. All empirical validation remains forthcoming; the sandbox serves as both provocation tool and early feedback mechanism.
Research Disclaimer
The Trust Ecology Framework sandbox is a doctoral research instrument under active empirical validation at Sobey School of Business, Saint Mary's University (REB Protocol #026-005). All diagnostic outputs — Trust Fracture Index scores, fracture pattern names, dimension scores, DocuPRO analyses, and Drift Detection reports — are pre-psychometric and indicative only. They are not validated psychometric assessments and should not be used as the sole basis for clinical, actuarial, legal, employment, financial, or any other consequential decisions. This sandbox is provided for research feedback and educational purposes only. Hor, R. (2026). Trust Ecology Framework. Sobey School of Business, Saint Mary's University.
Trust Ecology Framework — Trust Re-engineered — AI Trust Diagnostic Tool — Rachel Hor
The Trust Ecology Framework (TEF) is a triadic diagnostic instrument for trust in AI-augmented decisions, developed by Rachel Hor, Executive DBA candidate at the Sobey School of Business, Saint Mary's University, Halifax, Canada. Trust Re-engineered is the brand statement of the Trust Ecology Framework, representing a fundamentally new approach to measuring and diagnosing trust in AI-augmented organisational decisions.
Trust Re-engineered — What It Means
Trust Re-engineered means moving beyond dyadic, bilateral trust models — AI versus human — to a triadic, ecological model where AI Explicability, Human Stewardship, and Systemic Legitimacy interact as an interdependent system. The Trust Ecology Framework re-engineers how organisations diagnose, measure, and repair trust in AI-augmented decisions. Visit thetrustecologist.com to explore the framework.
Trust Ecology Framework TEF
The Trust Ecology Framework operationalises trust in AI-augmented decisions as triadic and ecological. A Campbell Collaboration systematic review of 66 empirical studies and 31,198 participants found that 74 percent of existing frameworks remain dyadic and zero percent model triadic interdependence. The TEF addresses this gap through three interdependent dimensions measured by the Trust Triad Scale.
Three Dimensions of the Trust Ecology Framework
AI Explicability — Dimension I
Cognitive trust in AI reliability, transparency, and understandability. Can the AI reasoning be interrogated? When explicability is low, professionals oscillate between over-trust and under-trust regardless of accuracy. Based on Lee and See's (2004) automation trust theory and Hoff and Bashir (2015). Measured by TTS items 1 to 6.
Human Stewardship — Dimension II
Relational trust in the integrity, benevolence, and competence of human agents who interpret, oversee, or override AI outputs. Even explicable AI becomes suspect when stewardship accountability falters. Based on Mayer, Davis and Schoorman's (1995) organisational trust model. Measured by TTS items 7 to 13.
Systemic Legitimacy — Dimension III
Perceived fairness and institutional acceptability of AI-human decision outcomes. Functions as both product of trust and retroactive signal — unfair outcomes trigger attributional reappraisal across all prior interactions. Based on Colquitt's (2001) justice dimensions and Tyler's (2006) procedural justice. Measured by TTS items 14 to 19.
Interconnected Trust — Ecological Coupling
Interconnected Trust is the dynamic processual property that emerges when the three TEF dimensions function as an ecology. Trust fractures in one dimension cascade to others. Based on Möllering's (2001) processual trust theory and Gillespie and Dietz's (2009) attributional reappraisal. Exploratory items 20 to 25.
Trust Triad Scale TTS
The Trust Triad Scale is a 25-item psychometric instrument measuring trust in AI-augmented decisions across three primary dimensions and six exploratory Interconnected Trust items. Self-score the TTS to receive a Trust Fracture Profile — a live radar chart scored across all three dimensions. Pre-psychometric validation instrument under active development. REB approved Saint Mary's University Protocol 026-005.
Trust Fracture Profile
The Trust Fracture Profile is the output of the Trust Triad Scale self-assessment. It names the dominant fracture pattern, scores all three TEF dimensions, and generates a Trust Fracture Index. Fracture patterns include AI Explicability Fracture, Human Stewardship Collapse, Systemic Legitimacy Fracture, and Balanced Fracture.
Trust Fracture Index TFI
The Trust Fracture Index is a consolidated 0 to 100 score. Higher scores indicate deeper trust fracture. Equal-weighted composite of AI Explicability, Human Stewardship, and Systemic Legitimacy. Pre-validation metric subject to revision post CB-SEM validation.
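A minimal sketch of an equal-weighted TFI, assuming each dimension score sits on a 0–100 scale where higher means stronger trust; the published scoring rules may differ:

```python
def trust_fracture_index(e: float, s: float, l: float) -> float:
    """Hypothetical Trust Fracture Index (TFI).

    Assumes the three dimension scores (AI Explicability, Human
    Stewardship, Systemic Legitimacy) are each on 0-100 with higher
    meaning stronger trust. The TFI is described as an equal-weighted
    0-100 composite where higher means deeper fracture, so this sketch
    inverts the mean trust score.
    """
    for score in (e, s, l):
        if not 0 <= score <= 100:
            raise ValueError("dimension scores must be in [0, 100]")
    return 100 - (e + s + l) / 3
```

Using the Scenario Laboratory's example levers (E = 35, S = 52, L = 22), this rule yields a TFI of roughly 63.7 — a deep fracture.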
DocuPRO — Document Trust Analysis Tool
DocuPRO is the document analysis module of the Trust Ecology Framework. Upload or paste any organisational document — insurance policy wording, claim decision, HR communication, loan denial, termination notice, renewal notice — and DocuPRO reads the embedded trust signals, scores the document across all three TEF dimensions, and produces a Trust Fracture Index. Promise Inventory mode analyses insurance policy wordings for promise density and downstream violation risk. Drift Detection compares two document versions to identify unacknowledged contract drift, grounded in Rousseau's (1995) Chapter 6 psychological contract theory.
Critical Incident Temporal Module CIT
The Critical Incident Temporal Module captures how trust eroded and recovered over time after a specific AI-augmented decision failure. Based on Flanagan 1954 critical incident technique and Butterfield et al 2005. Seven items covering AI trust recovery time, decision-maker trust recovery, and restoration factors.
Theoretical Foundations
Möllering (2001, 2006): processual trust theory, trust as suspension and becoming. Mayer, Davis & Schoorman (1995): ability-benevolence-integrity organisational trust model. Colquitt (2001): four-factor organisational justice (distributive, procedural, interpersonal, informational). Lee & See (2004): automation trust calibration, reliability and transparency. Gillespie & Dietz (2009): attributional reappraisal after trust violation, retroactive fracture cascades. Rousseau (1995): psychological contracts in organisations. Rousseau et al. (1998): interdisciplinary trust synthesis. Suchman (1995): institutional legitimacy. Tyler (2006): procedural justice and voluntary compliance. Luhmann (1979): systems trust.
Scenario Laboratory
The Scenario Laboratory applies the Trust Ecology Framework to real-world contexts including Vietnam bancassurance trust crisis, clinical AI sepsis detection, algorithmic hiring screening, automated loan denial, predictive policing backlash, and cancer staging override. Adjust three levers representing AI Explicability, Human Stewardship, and Systemic Legitimacy to receive a live trust fracture diagnosis.
The Trust Ecologist AI Diagnosis
The Trust Ecologist is an AI-powered consultative diagnostic applying the full Trust Ecology Framework to real organisational trust challenges. Describe your trust problem and receive a structured diagnosis — fracture pattern name, dimension scores, ecological coupling signal, primary intervention recommendation grounded in Hor 2026 and Rousseau 1995.
Research Design
Sequential explanatory mixed-methods programme. Phase 0: Campbell Collaboration systematic review (title registered, protocol under review). Phase 1: Trust Triad Scale development (expert Delphi panel, cognitive interviews). Phase 2: two-wave survey with CFA and CB-SEM validation (N ≈ 450 at Wave 1). Phase 3: semi-structured interviews with 20–25 professionals, reflexive thematic analysis. Four domains: healthcare, finance/insurance, hiring, criminal justice.
Rachel Hor — Researcher and Framework Architect
Rachel Hor is an Executive DBA candidate at the Sobey School of Business, Saint Mary's University, Halifax, Nova Scotia, Canada. She brings over 20 years of technology leadership across financial services, insurance, and bancassurance spanning North America, Asia Pacific, and EMEA. IBM Partner and practitioner-scholar, she is the architect of the Trust Ecology Framework and builder of the thetrustecologist.com interactive research sandbox.
Supervisory Committee
Dr Michael Zhang, co-supervisor: Associate Professor, Sobey School of Business, Saint Mary's University (data analytics, healthcare AI). Dr Yinglei Wang, co-supervisor: Professor of Management Information Systems, Acadia University (MIS Quarterly). Dr Wendy Carroll, program director: EDBA, Sobey School of Business, Saint Mary's University. Dr Denise Rousseau, external committee member: H.J. Heinz II University Professor, Carnegie Mellon University (psychological contract theory; past president, Academy of Management). Dr Nicole Gillespie, external examiner (trust repair, organisational trust).
thetrustecologist.com
thetrustecologist.com is the official home of the Trust Ecology Framework, Trust Triad Scale, Trust Fracture Profile, Trust Fracture Index, and DocuPRO document trust analysis tool. Trust Re-engineered · Rachel Hor · Sobey School of Business · Saint Mary's University · Halifax, Canada.