Trust in AI-augmented decisions involves three interdependent dimensions and the feedback loops between them. Existing tools measure one or two dimensions in isolation; none accounts for all three simultaneously, or for the Interconnected Trust feedback loops through which a legitimacy fracture cascades retroactively into both AI credibility and human steward integrity. These cross-dimensional effects are operationalised in the TTS through exploratory items (20–25), which test whether the triad holds together empirically.
Most frameworks treat trust in AI as a bilateral question — between a person and a system. The TEF begins from a different premise: that trust in AI-augmented decisions is always triadic, always processual, and always ecological.
AI Explicability: Cognitive trust in AI reliability, transparency, and understandability. When explicability is low, professionals cannot interrogate AI reasoning; they oscillate between over-trust and under-trust, regardless of accuracy.
Human Stewardship: Relational trust in the integrity, benevolence, and competence of the human agents who interpret, oversee, or override AI outputs. Even explicable AI becomes suspect when stewardship accountability falters.
Systemic Legitimacy: Perceived fairness and institutional acceptability of AI-human decision outcomes. Legitimacy functions as both a product of trust and a retroactive signal: unfair outcomes trigger attributional reappraisal across all prior interactions.
The ecology they form
The TEF integrates these three dimensions not as a checklist but as a living ecology, in which fractures or repairs in one dimension cascade to the others. This is what 74% of existing frameworks miss entirely.
Interconnected Trust is not a fourth primary dimension of the TEF. It is the dynamic, processual property that emerges when the three dimensions function as an ecology, creating feedback loops within the socio-algorithmic system (Möllering, 2001; Gillespie & Dietz, 2009).
An analogy: a tripod whose three legs are AI Explicability, Human Stewardship, and Systemic Legitimacy. Interconnected Trust is what the joints provide; without them, three strong legs still collapse.
Trust emerges as a system-wide dynamic from their ongoing feedback loops — not as the sum of independent parts. Low Legitimacy (unfair outcomes) can retroactively erode trust in both AI and human stewards. Strong stewardship can buffer unclear AI explanations.
The model embeds the triad in moderating concentric rings — organisational, sectoral, regulatory — and a processual spiral of iterative trust calibration and repair (Rousseau et al., 1998).
The core mechanism
This is the TEF’s most non-obvious finding. A legitimacy fracture doesn’t just affect future trust — it causes professionals to retroactively revise their attributions of all prior interactions. Trust that felt solid becomes suspect.
The primary causal mechanism is attributional reappraisal (Gillespie & Dietz, 2009): when professionals perceive unfair outcomes (low Legitimacy), they retroactively revise attributions — eroding Explicability trust and Stewardship integrity, irrespective of prior performance evidence.
Transparency interventions addressing only Explicability show modest average effects (r = .19) because they cannot intercept this legitimacy-driven cascade.
“In socio-algorithmic contexts, failures cascade: AI errors erode confidence in human oversight, unfair outcomes retroactively corrode both AI and steward credibility.” — Hor (2026) · TEF Chapter 2, drawing on Luhmann (1979) and Rousseau et al. (1998)
The evidence
The meta-analytic record confirms what the TEF predicts: transparency interventions show modest, inconsistent effects because the legitimacy and stewardship pathways are absent from existing models.
Think of a specific AI-augmented decision from your professional context. Rate each item 1–7 as you read. Your scores build a live Trust Fracture Profile across all three primary dimensions.
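As a rough illustration of how such a profile might be computed, the sketch below averages 1–7 item ratings within each TEF dimension and flags the weakest dimension as the likely fracture point. The item-to-dimension grouping, the sample ratings, and the function name are hypothetical, not the published TTS scoring key.

```python
from statistics import mean

# Hypothetical grouping of 1-7 item ratings by TEF dimension.
# The real TTS item assignments are defined in the instrument itself.
ratings = {
    "AI Explicability":    [5, 4, 6, 5],
    "Human Stewardship":   [3, 2, 4, 3],
    "Systemic Legitimacy": [6, 5, 6, 5],
}

def trust_fracture_profile(ratings):
    """Average each dimension and flag the weakest as the likely fracture point."""
    profile = {dim: round(mean(items), 2) for dim, items in ratings.items()}
    fracture = min(profile, key=profile.get)
    return profile, fracture

profile, fracture = trust_fracture_profile(ratings)
print(profile)   # per-dimension means on the 1-7 scale
print(fracture)  # prints "Human Stewardship" for the sample ratings above
```

A real diagnostic would weight items and model cross-dimensional coupling rather than treat the dimensions independently; this sketch only shows the basic profile-building step.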
Select a real-world scenario drawn from the TEF’s target domains. Adjust the three manager-influenceable levers, each grounded in the TTS dimensions, and receive a live triadic diagnosis.
Apply the Trust Ecology Framework to your real-world context — one-shot structured diagnosis or an iterative dialogue that lets you explore cascade effects and intervention sequencing.
Upload or paste an organisational document — policy letter, claim decision, HR communication. DocuPRO reads the trust signals embedded in the language and maps them across all three TEF dimensions.
A sequential explanatory mixed-methods programme — four phases, each informing the next. Grounded in processual realism and critical pragmatism, the design treats trust as emergent, relational, and never static.
To what extent do existing empirical studies operationalise trust in AI-augmented decisions as triadic and interdependent — rather than dyadic?
Does the TTS demonstrate adequate reliability, validity, and construct coverage across the three TEF dimensions and the Interconnected Trust dimension?
How do the three primary dimensions interact to predict trust outcomes — and does Interconnected Trust mediate cross-dimensional feedback loops?
How do professionals narrate trust fracture and repair — and do these accounts reflect the attributional reappraisal mechanism predicted by the TEF?
TEF Contribution — His expertise in analytics and organisational decision-making provided foundational scaffolding for the TEF’s empirical backbone, particularly in positioning the framework to speak to both academic rigour and practitioner relevance in high-stakes AI-augmented contexts.
TEF Contribution — His methodological rigour shaped the TEF's empirical design — from the sequential explanatory mixed-methods architecture to the planned CFA, SEM, and LPA analyses — with a strong emphasis on transparency, reproducibility, and behavioural validity.
TEF Contribution — Her practitioner-scholar perspective ensured the TEF remains grounded in real-world professional decision-making. She bridged abstract theory with practical application, directly influencing the sandbox’s design as a usable diagnostic tool for organisational leaders.
TEF Contribution — Her pioneering work on psychological contracts and evidence-based management provided critical theoretical scaffolding for the TEF, particularly in conceptualising trust as relational, emergent, and ecologically interdependent.
“A processual approach does not ask ‘how much trust exists?’ but ‘how is trust being constructed, maintained, and fractured across the triadic ecology at this moment?’” Hor (2026) · Trust Ecology Dissertation Proposal V14