The 6 C's is HBAI's framework for naming where an organisation sits in its AI adoption. It draws on BIT AI ADOPT, ScienceDirect, Nature, BCG/MIT and adjacent literature, but the synthesis — the six axes, their thresholds, the named patterns they resolve into — is ours.

Six dimensions of AI adoption, each anchored in research. The diagnostic measures the leader's read on each dimension; the bearing names the pattern the organisation is sitting in.

Clarity
Do people know what is being adopted and why — is the purpose articulated beyond "stay competitive"?
Anchored in AI transparency and job-role-clarity research (ScienceDirect, 2026): the link between purpose articulation and adoption fidelity.
Do people have actual hands-on skills, not just training hours on a slide?
Do people believe they can succeed with these tools, or are they anxious or quietly disengaging?
Does the environment reward AI-augmented behaviour, or punish it, overtly or quietly? Is there psychological safety to experiment?
Is the pace of change matched to the organisation's actual capacity to absorb it, or is it paralysed or over-rushed?
Is impact being measured honestly — including the human costs — or is the organisation performing success?
v1.0 · 2026 · revision-pending
This page is being expanded with full citations and per-dimension question rationales. If you'd like the current draft of the methodology, ask.