Externalising Cognition with AI

Using ChatGPT as an applied cognitive enhancement system.

Most AI publications focus on speed and output quality. This piece makes a different observation: the more consequential shift may lie in how cognitive judgement is supported. What follows describes the process that emerged, and why it suggests the potential for enhanced professional decision-making and accountability under uncertainty, rather than treating AI as simply a generator of answers.

I’ve been using ChatGPT for some time in fairly conventional ways.

  • As a design counterpart to explore ideas and challenge assumptions
  • As a coding assistant for drafting and debugging
  • As a callable API to generate drafts, options, or variations
  • As a prompt engine to formalise reusable patterns
  • As a content generator for first passes that I would then refine
  • As a tool to tailor a detailed career history to a specific job description

All practical. All useful. All fundamentally transactional.

Prompt in. Response out. Move on.

Recently, something different happened. Not faster delivery or better wording, but a change in how the interaction itself functioned.

The value was in the movement, not the answers

In this piece of work, insight did not arrive as a generated response. It emerged through progression — step by step — as each exchange revealed a constraint, implication, or alternative that reshaped the next question.

The AI did not supply a conclusion. It altered the path that led to one.

The position that eventually emerged was not one I had articulated beforehand, and quite possibly not one I would have reached alone. The contribution was not information, but assisted traversal of a problem space — keeping the thinking moving when it would otherwise stall, loop, or prematurely close.

As the reasoning progressed, one practical problem surfaced that required a concrete response: how to reliably recall and articulate complex experience under pressure.

That question became explicit — how do you support recall when time, stress, and scrutiny are working against you? — and it shaped the direction of the thinking that followed.

In response, I deliberately applied Tony Buzan’s principles on memory and associative structuring as a practical means of supporting retrieval under pressure (Buzan, 1974). The thinking journey ultimately resolved into a concrete outcome: an IMRS, an Interview Management & Response System, with those principles embedded by design.
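
To ground that outcome without overclaiming: the article does not detail the IMRS internals, so the following is a minimal, hypothetical sketch of Buzan-style associative cueing as a data structure. Short trigger words map to structured experience records, so that a single cue can pull back a whole story under pressure. Every name in it is illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class Story:
        """One experience record, pre-structured for fast recall."""
        title: str
        situation: str
        action: str
        result: str
        cues: list[str] = field(default_factory=list)  # short associative triggers

    def build_cue_index(stories: list[Story]) -> dict[str, list[Story]]:
        """Invert the stories into a cue -> stories lookup (hypothetical IMRS core)."""
        index: dict[str, list[Story]] = {}
        for story in stories:
            for cue in story.cues:
                index.setdefault(cue.lower(), []).append(story)
        return index

    stories = [
        Story(
            title="Legacy migration under a fixed deadline",
            situation="Payments platform migration with an immovable regulatory date",
            action="Split the cutover into small, reversible stages",
            result="Delivered on time with no rollback events",
            cues=["migration", "deadline", "risk"],
        ),
    ]

    index = build_cue_index(stories)
    for story in index.get("deadline", []):  # one cue recalls the whole structured story
        print(f"{story.title}: {story.situation} -> {story.action} -> {story.result}")

The design point is only that retrieval is keyed on associations rather than on re-reading a linear history, which is what Buzan’s principles suggest for recall under time pressure.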

This felt meaningfully different from both automation and augmentation as they are usually described.

A clarifying analogy

The closest analogy comes from Star Trek.

Spock interrogates the ship’s computer about a transport manoeuvre under unprecedented conditions. The computer does not decide. It extrapolates possibilities, limits, and probabilities.

Spock remains responsible for judgement.

What matters is not that the computer provides an answer, but that its responses reshape Spock’s understanding of what is feasible, what is risky, and what remains uncertain. The reasoning emerges through interaction.

That is what this felt like in practice. Not delegation of thinking, but thinking made possible through participation.

This may not be new

It’s entirely possible that this isn’t novel.

Many people may already be working this way without naming it. In hindsight, it resembles familiar practices: explaining a problem aloud and discovering the flaw mid-sentence; pair programming where the presence of another person changes the solution; facilitated discussions where insight emerges through dialogue rather than analysis alone.

What may be different here is that the counterpart is always available, holds context across turns, and responds without social pressure, status dynamics, or fatigue.

Whether that constitutes something new, or simply a familiar cognitive pattern with a new interface, is an open question.
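
On the mechanical point of holding context across turns: when the interaction runs through the API rather than a chat window, the context is simply the message history, resent on every call so that earlier exchanges keep shaping later ones. A minimal sketch, assuming the OpenAI Python client; the model name and prompts are placeholders, not a record of the sessions described above.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The conversation state lives in this list; each turn appends to it,
    # so every new question is interpreted against the whole exchange so far.
    messages = [
        {"role": "system",
         "content": "Act as a critical design counterpart, not an answer engine."},
    ]

    def turn(user_text: str) -> str:
        messages.append({"role": "user", "content": user_text})
        response = client.chat.completions.create(
            model="gpt-4o",     # placeholder model name
            messages=messages,  # the full history is resent on each call
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        return reply

    print(turn("What constraints does choosing option A impose on option B?"))
    print(turn("Given that, what would invalidate the whole approach?"))  # builds on turn one

Nothing here is novel; the point is that the absence of fatigue and status dynamics comes almost for free once the counterpart is a stateless API fed its own transcript.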

Where this shows up in practice

This mode becomes visible when progress comes from movement through a problem rather than convergence on a predefined answer.

You see it in:

  • senior decision-making under uncertainty
  • system or organisational design where each choice reshapes the constraints
  • sense-making in ambiguous situations with conflicting narratives
  • professional judgement where confidence exists but clarity does not

In these contexts, AI isn’t valuable because it is “right”. It’s valuable because it keeps the thinking moving — exposing implications, second-order effects, and blind spots as the reasoning unfolds.

The conclusion does not arrive at the end, as it typically does in transactional work. It emerges along the way.

What this is — and isn’t

This is not about AI “thinking”, and it is not about replacing expertise.

It aligns closely with long-established work on distributed cognition and reflective practice, which shows that reasoning often occurs across people, artefacts, and environments rather than inside a single mind (Hutchins, 1995).

This view of reasoning as something that emerges through dialogue and iteration rather than analysis alone is central to Schön’s work on reflective professional practice (Schön, 1983).

Large language models don’t change those fundamentals. They simply make this kind of interaction easier to sustain without another human present.

Responsibility for direction, judgement, and stopping criteria remains entirely human. Without that, the interaction degrades into fluent but shallow pattern completion — something most of us have already learned to recognise.

Used with restraint, however, the system functions less like a tool and more like a cognitive surface: something you think across, rather than something you query.
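
As a sketch of what keeping the stopping criteria human can mean in practice (illustrative only, not a prescribed workflow), the loop below leaves the decision to continue, redirect, or stop with the person, with a hard turn ceiling as a guard against drifting into that shallow pattern completion:

    def externalised_reasoning_session(ask_model, max_turns: int = 10) -> list[str]:
        """Run a dialogue where the human, not the model, decides when to stop.

        `ask_model` is any callable taking a prompt string and returning a reply,
        e.g. the hypothetical `turn` function sketched earlier.
        """
        trail: list[str] = []  # record of the traversal, not just a final answer
        for _ in range(max_turns):  # hard ceiling as a guard against drift
            question = input("Next question (blank line to stop): ").strip()
            if not question:  # the stopping criterion stays on the human side
                break
            reply = ask_model(question)
            trail.append(f"Q: {question}\nA: {reply}")
            print(reply)
        return trail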

The practical implication for professional decision-making is not improved answers, but improved judgement formation. By enabling reasoning to progress incrementally, exposing constraints, second-order effects, and implicit assumptions as it goes, this mode of interaction can support more disciplined decisions under uncertainty. Used properly, it offers a way to focus attention on the right moments in a decision-making process, to surface accountability, and to make the basis for judgement explicit, rather than accelerating towards premature conclusions.

Closing thought

I’m not claiming this is new; for me, though, it was a light-bulb moment.

What it does describe is a mode of working that feels materially different from prompt-response generation: one where insight emerges through participation rather than extraction.

Whether that distinction holds up under wider scrutiny remains to be seen, but it feels real enough to warrant careful articulation.

Best regards, RichFM

References

  • Hutchins, E. (1995). Cognition in the Wild. MIT Press.
  • Schön, D. A. (1983). The Reflective Practitioner. Basic Books.
  • Buzan, T. (1974). Use Your Head. BBC Books.

#AppliedAI #CriticalThinking #DecisionMaking #SystemsThinking #ProfessionalJudgement #Ghostgen.AI
