How it works
One process, four perspectives.
The same engagement looks different from each chair. Each role has a clear job, and every AI contribution stays inside human accountability.
Learner
- Read the situation
  Each engagement opens with a real professional situation drawn from the field. The learner names what they see before receiving support.
- Act in hybrid dialogue
  The learner responds to an AI actor inside the scenario: a customer, patient, supervisor, or another role defined by the field expert.
- Reflect, then re-attempt
  The sparring partner poses reflective questions. The learner sees the full thread, understands the evidence behind it, and can try again.
Trainer
- Assign validated scenarios
  The trainer chooses which field-expert-validated scenarios fit the cohort right now.
- Watch engagements unfold
  The trainer can drop into a dialogue live or review it after the fact, with the full thread and AI signals visible.
- Render the verdict
  For every competency signal, the trainer accepts, adjusts, or rejects it. Only that verdict updates the learner record.
Field expert
- Draft a scenario
  The expert authors the situation, actors, phases, competencies, and success criteria in field-specific language.
- Validate as a peer
  AI-suggested and human-drafted scenarios pass through the same expert gate before learners face them.
- Improve without disruption
  Substantive edits fork to a new version so in-flight engagements keep their original scenario intact.
Org admin
- Hold the org view
  Admins see capability development across fields and cohorts, built on validated evidence only.
- Coordinate fields
  They manage roles, field structures, and the library of validated scenarios.
- Retire and audit
  Scenario status changes remain traceable from draft through live use to retirement.
Want to see it run?
We are running pilots with vocational organisations in the EU. Reach out and we will walk through your fields together.