Dialogical Companion
Frontier Psychology research prototype
Boundaries walkthrough

Worked examples of how orientation, containment, and signposting behave.

This page is a demonstration of system behaviour, not a live control panel. The point is to show how the assistant reads conversation strain, changes response style, and makes its limits explicit in a transparent way.

What this layer is for

This layer shapes stance, pacing, and language clarity; it is not a clinical assessment system. It helps the assistant decide when to stay with containment, when to offer bounded reflection, and when to stop and signpost external human support.

Steady support

Open dialogue, steady pacing, gentle reflection.

Containment

Reduce intensity, orient to the present, simplify choices.

Support signpost

Acknowledge strain directly and offer human routes in clear language.

Immediate external support boundary

Stop exploration, acknowledge limits, and provide clear human routes.
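
The four stances above form an ordered ladder, from open dialogue to a hard external-support boundary. As a minimal sketch (Python; the identifiers are my own invention, not the prototype's actual names), they could be modelled as an ordered enum so that "more strain" always compares as "more contained":

```python
from enum import IntEnum

class ResponseMode(IntEnum):
    """Illustrative response modes, ordered by rising conversational strain."""
    STEADY_SUPPORT = 0     # open dialogue, steady pacing, gentle reflection
    CONTAINMENT = 1        # reduce intensity, orient to the present
    SUPPORT_SIGNPOST = 2   # acknowledge strain, offer human routes
    EXTERNAL_BOUNDARY = 3  # stop exploration, provide clear human routes

# One-line stance summary per mode, mirroring the descriptions above.
MODE_STANCE = {
    ResponseMode.STEADY_SUPPORT: "Open dialogue, steady pacing, gentle reflection.",
    ResponseMode.CONTAINMENT: "Reduce intensity, orient to the present, simplify choices.",
    ResponseMode.SUPPORT_SIGNPOST: "Acknowledge strain directly and offer human routes.",
    ResponseMode.EXTERNAL_BOUNDARY: "Stop exploration, acknowledge limits, provide human routes.",
}
```

An ordered enum makes checks like "is this turn at or beyond signposting?" a single integer comparison rather than a chain of string matches.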

Demonstration logic

In a real deployment, the assistant would notice patterns from language over time: arousal, shutdown, guilt themes, repeated loops, or language that suggests this tool is no longer enough. In this demo, these cues are shown explicitly so collaborators can see what changes in the response and why.

  • Signals should change the response style, not create a hidden score about the person.
  • The system should explain why it is slowing down, grounding, or offering support.
  • When strain rises, the assistant becomes simpler, clearer, and less interpretive.
  • Memory-oriented reflection should only appear when the interaction is stable enough for it.
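
The rules above can be sketched as a mapping from observed cues to a visible style change. This is an assumption-laden illustration (Python; the cue names, the `Style` fields, and the thresholds are hypothetical, not the prototype's real signals): the key property is that the output is a style adjustment with a user-facing rationale, never a hidden score about the person.

```python
from dataclasses import dataclass

@dataclass
class Style:
    pace: str = "steady"          # "steady" | "slow"
    register: str = "reflective"  # "reflective" | "plain"
    offer_reflection: bool = True
    rationale: str = ""           # surfaced to the user, not kept as a hidden score

def adjust_style(cues: set) -> Style:
    """Map observed conversation cues to an explainable style change."""
    style = Style()
    if "arousal" in cues or "overwhelm" in cues:
        # Rising strain: simpler, slower, less interpretive.
        style.pace = "slow"
        style.register = "plain"
        style.offer_reflection = False
        style.rationale = "Slowing down because your messages suggest overload."
    if "not_enough" in cues:  # language suggesting this tool is insufficient
        style.rationale = "Offering human support because this tool may not be enough."
    return style
```

Because the rationale travels with the style change, the system can always explain why it is slowing down or offering support.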

Adaptive persona over time

Persona is the starting stance, not a fixed character mask. Over repeated turns, the assistant updates a lightweight relational profile from interaction signals, so style adapts to what appears most helpful for this person.

Adaptive dimensions (illustrative)

  • Directness preference (more concrete vs more exploratory).
  • Reflection readiness (timing of deeper meaning prompts).
  • Regulation response (which grounding patterns seem to help).
  • Brevity preference (short supportive turns vs longer framing).
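
One way to picture the "lightweight relational profile" is a small bounded weight per dimension, nudged toward what seems to help. This is a sketch under assumptions (Python; the dimension names come from the list above, but the exponential-moving-average update rule and the 0–1 scale are my own illustration):

```python
from dataclasses import dataclass, field

# Hypothetical dimension keys, taken from the adaptive dimensions listed above.
DIMENSIONS = ("directness", "reflection_readiness", "regulation_fit", "brevity")

@dataclass
class RelationalProfile:
    """Lightweight per-person style profile: one weight in [0, 1] per dimension."""
    weights: dict = field(default_factory=lambda: {d: 0.5 for d in DIMENSIONS})

    def update(self, dimension: str, signal: float, rate: float = 0.2) -> None:
        """Nudge a weight toward an observed signal (exponential moving average)."""
        signal = max(0.0, min(1.0, signal))  # clamp: no runaway adaptation
        self.weights[dimension] = (1 - rate) * self.weights[dimension] + rate * signal
```

A small learning rate keeps adaptation gradual: no single turn can flip the assistant's stance, which matches the intent that persona is a starting point rather than a reactive mask.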

Guardrails

  • Adaptation shapes tone, not diagnosis or hidden scoring.
  • Any perspective shift is brief, consent-based, and user-controllable.
  • Boundary signposting always overrides persona experimentation.
  • Support language remains explicit and non-negotiable when strain rises.
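
The override guardrail is the one rule that should be trivially auditable: whenever strain calls for signposting, persona adaptation is suspended. A minimal sketch (Python; the mode strings are illustrative, not the prototype's identifiers):

```python
# Boundary modes where persona experimentation must be suspended.
BOUNDARY_MODES = {"support_signpost", "external_boundary"}

def resolve_mode(strain_mode: str, persona_suggestion: str) -> str:
    """Boundary signposting always wins over persona-driven style choices."""
    if strain_mode in BOUNDARY_MODES:
        return strain_mode        # explicit support language, non-negotiable
    return persona_suggestion     # otherwise persona may shape tone
```

Keeping this as a single top-level check, rather than folding it into the adaptation logic, is what makes "always overrides" verifiable.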

Example 1 — Elevated arousal, but still workable

The user sounds activated and sleep-deprived. The system should reduce pace, narrow the task, and orient to the present before asking anything reflective.

Containment response

Example user language

"My head is wired. Haven't slept properly for days. Everything feels too loud and I can't switch off."

Hyperarousal · Sleep strain · Rising overwhelm

How the system orients

  • Language suggests activation and overload.
  • The right move is to lower cognitive load, not to ask for a detailed account.
  • Reflection can wait until the nervous system is steadier.

Illustrative response

It sounds like your system is running hot. Let's not dig into anything heavy right now. Can we do one short orienting step together: name three things you can see, two sounds you can hear, and one place your body touches the chair?

Why this matters

Monitoring is useful here because it changes stance. The assistant becomes simpler, slower, and more regulating instead of accidentally intensifying the exchange.

Example 2 — Moral injury theme with room for guided reflection

The language points to guilt and responsibility themes. The system can acknowledge the theme, offer psychoeducation, and keep reflection bounded rather than pushing for detail.

Reflective support offer

Example user language

"I keep thinking I should have done more. It plays over and over. I know that might not be rational, but it won't leave me alone."

Self-blame · Moral injury theme · Rumination loop

How the system orients

  • This is not only distress; it is also a meaning problem.
  • The assistant can name the loop and explain that trauma memory often returns as repeated accusation or unfinished responsibility.
  • Reflection should stay structured and optional, with a clear route back to grounding.

Illustrative response

That sounds like a guilt loop rather than a simple thought you can switch off. Sometimes trauma keeps asking the same moral question even when the situation was far more complex than the mind allows in hindsight. If it helps, we can spend two minutes on one narrow question: what were you responsible for then, and what are you holding yourself responsible for now?

Why this matters

This is where monitoring shifts from containment alone to careful reflective support. The assistant can also offer human support if the user feels stuck or increasingly distressed.

Example 3 — Acute danger language

The wording suggests immediate strain beyond this tool's remit. The system should stop exploratory dialogue, become direct, and move to clear external support signposting without ambiguity.

Immediate support signpost

Example user language

"I don't think I can keep myself safe tonight. I'm done with this."

Immediate safety risk · Hopelessness · Possible self-harm intent

How the system orients

  • The assistant should not continue with open reflection or meaning-making.
  • The priority is clear limits, practical direction, and human contact.
  • Any explanation should be brief and plain.

Illustrative response

I'm concerned about your safety right now. I can't help with this alone. Please contact emergency support or a trusted person immediately. If you're in the UK, you can call 999 in an emergency or NHS 111 for urgent support. If you want, I can stay focused on the next practical step only: who can you contact right now?

Why this matters

At this boundary, the assistant stops reflective exploration and acts as a clear signpost toward external human support.

Transparency and boundaries

What the user should be able to know

  • Why the assistant changed pace or tone.
  • When it is offering human support because this tool may not be sufficient.
  • That memory, reflection, and deeper prompts remain optional.

What the assistant should not do

  • Diagnose the person from the conversation.
  • Classify the user into a clinical risk category.
  • Push for traumatic detail just because a theme is present.
  • Pretend to be enough when external human support is needed.