Probability Field Prompting: Sculpting AI Character Behavior



By Lodactio | Prompt Crucible

This article builds on concepts introduced in Unlock AI Character Behavior Through Effective Naming. If you haven’t read that yet, start there — naming is the entry point to everything discussed here.


Introduction

Most AI character prompting treats the model like an actor following a script. You write instructions. The model executes them. When the character feels flat, you write more instructions. When the character drifts, you write stricter instructions. The result is a character that hits its marks but never feels alive.

This article proposes a different framework: probability field prompting. Instead of telling the model what a character does, you shape the probability landscape that the model navigates. The character emerges from the terrain — varied, consistent, and alive — because the model is exploring a space rather than following a rail.

This isn’t academic theory. It’s a practical framework developed through thousands of hours of testing across Claude, Gemini, and Fiction Lab. Every principle here has been validated through direct A/B comparison.


Part 1: Two Ways to Prompt

There are fundamentally two approaches to defining character behavior in AI systems. Understanding the difference is the foundation of everything that follows.

Directive Prompting (Field Collapse)

Directive prompting gives the model explicit behavioral instructions:

She always yells.
She bats at people's faces when she wants attention.
She turns away from people while still talking to them.

Each directive collapses the probability field to a single point. The model has one option per instruction. Yell. Bat at faces. Turn away. This produces consistent behavior in the narrowest sense — the model does what you said — but it has critical failure modes:

  • No variation. The model reproduces the exact behavior rather than generating related behaviors. You get the same yell in every scene.
  • No generalization. Behaviors that aren’t explicitly listed never appear. If you didn’t say “headbutt,” she’ll never headbutt.
  • No survival past the greeting. A pre-written greeting message can demonstrate behaviors that the directive card can’t reproduce, because the greeting is a performance and the card is a rulebook. The rules are too narrow to regenerate the performance.
  • No character-driven logic. The model is following instructions, not inhabiting a psychology. The character does things because the author said so, not because the character would.

Directive prompting is the equivalent of pinning a butterfly to a board. You’ve preserved it perfectly. It’s also dead.

Probability Field Prompting (Field Shaping)

Probability field prompting gives the model the conditions from which behavior emerges:

She believes yelling gets everything she wants.

This single line doesn’t tell the model what to do. It defines a probability landscape:

  • Peak probability: Yelling when she wants something. This will happen most often.
  • High probability: Yelling louder when the first yell doesn’t work. Escalation follows naturally from the belief.
  • Medium probability: Trying different kinds of yells — demanding, whining, screeching. If she believes yelling works, she’ll experiment with the tool.
  • Low probability: Confusion or frustration when yelling fails completely. The belief is being challenged, and the model must navigate that.
  • Near-zero probability: Whispering, asking politely, using subtlety to get what she wants. The belief makes these behaviors almost impossible, without you ever having to say “she never whispers.”

One line. No explicit behaviors listed. And the model produces more varied, more consistent, more alive output than a dozen directives could achieve — because it’s navigating a field, not following a rail.


Part 2: What Is a Probability Field?

When a language model generates the next token in a response, it’s selecting from a probability distribution across all possible tokens. Every piece of context — the system prompt, the character card, the conversation history, the character’s name — shapes that distribution.

A probability field, as used in this framework, is the overall behavioral landscape that your character card creates. It has:

  • Peaks — behaviors that are highly probable given the card’s content.
  • Valleys — behaviors that are nearly impossible given the card’s content.
  • Slopes — gradients between likely and unlikely behavior, where the interesting and surprising outputs live.
  • Undefined regions — areas the card doesn’t address, where the model has freedom to explore.

The goal of probability field prompting is to sculpt this landscape intentionally — to raise the peaks where you want characteristic behavior, deepen the valleys where you want behavioral boundaries, and leave open terrain where you want the model to surprise you.

Directive prompting doesn’t sculpt a landscape. It flattens it into a checklist.
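The peaks-and-valleys picture can be made concrete with a toy model. The sketch below is purely illustrative: the "behaviors" and the numbers are invented, and a real model works over tokens, not whole behaviors. But it shows the core mechanic — a single tilt applied to the scores reshapes the entire distribution at once, raising one peak, leaving a slope, and digging valleys, exactly the way one belief line reshapes the field.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits.values())
    exp = {k: math.exp(v - m) for k, v in logits.items()}
    total = sum(exp.values())
    return {k: v / total for k, v in exp.items()}

# Toy "next behavior" scores before any card content: a flat field.
base = {"yell": 0.0, "whine": 0.0, "ask politely": 0.0, "whisper": 0.0}

# One belief line tilts the whole landscape at once: a peak (yelling),
# a slope (whining is yelling-adjacent), and valleys (politeness,
# whispering). The tilt values are invented for illustration.
belief_tilt = {"yell": 3.0, "whine": 1.0, "ask politely": -2.0, "whisper": -4.0}

shaped = softmax({k: base[k] + belief_tilt[k] for k in base})

for behavior, p in sorted(shaped.items(), key=lambda kv: -kv[1]):
    print(f"{behavior:>12}: {p:.3f}")
```

Note that nothing is forbidden outright: whispering still has nonzero probability, it is just a deep valley — which is why field-shaped characters can occasionally surprise you without breaking.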


Part 3: Tools for Shaping the Field

Probability field prompting isn’t a single technique. It’s a family of techniques that all serve the same purpose: shaping the behavioral probability landscape without collapsing it. The following tools are ordered from highest-level to most granular.

3.1 — Naming (Field Seeding)

For a full treatment of naming mechanics, see Unlock AI Character Behavior Through Effective Naming.

A character’s name is the first token the model processes, and it seeds the entire probability field before any other card content is read. The name activates associated clusters in the model’s training data, establishing a behavioral prior that every subsequent token either reinforces or modifies.

Demonstration: A catgirl character card was tested with two different names, all other content identical. The pre-written greeting message included specific cat-like behaviors: batting at faces, turning away while still shouting, circling on the user’s chest.

With the name “Tabby”, the model produced standard catgirl behavior in its responses. The face-batting, directionless shouting, and physical pestering from the greeting message never reappeared. The model converged on “demanding but socially coherent girl who says nya.”

With the name “Gremlin”, the same card produced:

  • Headbutting the user’s shoulder (physical pestering — same family as face-batting, but novel)
  • Talking into the user’s bicep instead of at their face (directionless communication)
  • Strategically switching from screaming to purring because she calculated sweetness might work faster (feral manipulation logic)
  • “Offended surprise whenever she’s been physically relocated” (gremlin indignation)
  • Kneading the user’s arm (deep cat behavior, not surface-level)

None of these behaviors were in the card. None were in the greeting. The name “Gremlin” seeded a probability field where feral, chaotic, socially unaware behaviors were high-probability — and the model generated them spontaneously.

A handful of characters changed. The behavioral range exploded.

Naming is field seeding: it doesn’t define specific behaviors, but it tilts the entire landscape before anything else begins.
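The Tabby/Gremlin comparison above can be scripted as a minimal A/B harness: one card template, two names, everything else held constant. This is a sketch, not a finished tool — the card text is abbreviated, and the model call is left as a placeholder for whichever API you actually use.

```python
# Minimal name-swap A/B harness: the name is the ONLY variable.
CARD_TEMPLATE = """\
{name} is a catgirl. She believes yelling gets everything she wants.
She has intuited that physical pestering is effective communication
since she was little.
"""

def build_variants(template, names):
    """Render one card per name, holding all other content identical."""
    return {name: template.format(name=name) for name in names}

variants = build_variants(CARD_TEMPLATE, ["Tabby", "Gremlin"])

# In a real test, send each variant to the model several times and tally
# which greeting behaviors reappear, e.g. (generate() is a placeholder):
# for name, card in variants.items():
#     responses = [generate(card) for _ in range(20)]
```

Keeping the name as a template field makes it impossible to accidentally change two variables at once, which is what invalidates most informal A/B comparisons.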

3.2 — Belief Systems (Field Contours)

The core mechanism of probability field prompting is encoding beliefs and internal logic rather than behaviors. Beliefs create contours in the field — peaks, valleys, and slopes that the model navigates to produce emergent behavior.

Directive (collapsed):

She always yells.

Probability field (contoured):

She believes yelling gets everything she wants.

Directive (collapsed):

She bats at people's faces for attention.
She turns away from people while talking.
She doesn't follow conversational norms.

Probability field (contoured):

She believes that everyone she's aware of hears her.
She has intuited that physical pestering is effective communication since she was little.

The belief-based versions define the same behavioral territory but leave the model room to generate varied, novel expressions of those beliefs. The model understands why the character does things, which means it can invent new things the character would do for the same reasons.
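A crude way to audit a draft card for field-collapsing lines is a pattern check over its phrasing. The sketch below is a heuristic only — the patterns are invented for illustration and will miss plenty of cases; nothing replaces actually reading the card. But it captures the distinction: "always/never" and bare behavior statements collapse the field, while "believes/has intuited" phrasings shape it.

```python
import re

# Heuristic flags for directive (field-collapsing) phrasing. Illustrative,
# not exhaustive.
DIRECTIVE_PATTERNS = [
    r"\balways\b",
    r"\bnever\b",
    r"^she (yells|bats|turns|doesn't)",  # bare behavior statements
]

# Phrasings that tend to shape the field rather than collapse it.
FIELD_MARKERS = [r"\bbelieves\b", r"\bhas intuited\b", r"\bfeels that\b"]

def audit_card(card: str):
    """Split a draft card's lines into (directive, field-shaped) buckets."""
    directives, field_shaped = [], []
    for line in card.strip().splitlines():
        low = line.strip().lower()
        if any(re.search(p, low) for p in DIRECTIVE_PATTERNS):
            directives.append(line)
        elif any(re.search(p, low) for p in FIELD_MARKERS):
            field_shaped.append(line)
    return directives, field_shaped

card = """\
She always yells.
She believes yelling gets everything she wants.
She bats at people's faces for attention.
"""
directives, field_shaped = audit_card(card)
```

Every line the audit flags as directive is a rewrite candidate: find the belief that would produce the behavior, and encode that instead.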

3.3 — Implication Traits (Hidden Depth)

Some of the most powerful field-shaping happens through traits that look mundane on the surface but carry deep behavioral implications. These traits work because the model infers the unstated consequences of the stated fact.

Consider this set of traits for a character with a parasocial obsession:

She believes she enjoys drawing, but hasn't picked up a pencil in a long time.
Every once in a while, she can get absorbed into whatever she's focusing on,
sometimes realizing after a while that she had forgotten to eat and is big hungry.
She sometimes likes a little light on when she sleeps.
She occasionally enjoys collecting small things she finds on the ground.
If she feels stressed, she believes humming can help her settle.
She has recently been reading a lot of Cosmo.

None of these lines say “she is lonely,” “she is mentally unwell,” “she is obsessive,” or “she has lost herself.” But the model derives all of it from the negative space:

  • Abandoned drawing → she had a life before, it’s atrophied, she identifies with a self that no longer exists.
  • Hyperfixation and forgetting to eat → the same mechanism that created her obsession, described through a mundane lens.
  • Nightlight → anxiety, vulnerability, something she’s afraid of that she hasn’t addressed.
  • Collecting small things → comfort-seeking through objects, hoarding tendencies, finding value in what others overlook — the same pattern that made her treasure small interactions.
  • Believes humming helps → “believes” is the key word. It’s a coping mechanism she clings to, not a solution. It’s a behavioral tell the model can deploy as a signal.
  • Reading Cosmo → her entire framework for relationships comes from magazine advice. She’s performing a sexuality she learned from articles, not experience.

Each trait is a stone dropped into the probability field, creating ripples that spread far beyond the literal content. The model doesn’t just use these traits — it reasons from them, generating behavior informed by implications the card never states.

3.4 — Voice-Embedded Description (Field Perspective)

A subtle but powerful technique: writing the character card in the character’s voice without explicitly marking it as first-person narration. This shifts the probability field so the model processes the card as the character’s self-perception rather than objective authorial description.

Standard authorial description:

She has large breasts, wide hips, and long legs. She is a virgin.

Voice-embedded description:

Her tits are at least g-cup, hefty and full and wobbles and bounces
with her movement. Her waist is small, it widens out into wide
child-bearing hips and a thick ass. Her thighs taper down to sexy
legs. She's tight and virgin, though when aroused extremely wet and
penetration is easy and painless.

The second version isn’t objective. It’s how the character sees herself:

  • “At least g-cup” — she’s estimating, like someone looking in a mirror.
  • “Sexy legs” — self-appraisal through an internalized male gaze.
  • “Penetration is easy and painless” — a virgin reassuring herself, pre-narrating away her anxiety about sex.

The model absorbs this not as physical fact but as psychological data. The character’s self-consciousness, performative confidence, and anxiety become embedded in how the model writes her body — without a single line about her mental state.

3.5 — Attractor Basin Management (Field Hazards)

Language models have pre-existing deep wells in their probability landscapes — attractor basins — that pull characters toward established archetypes. If a character’s circumstances match an attractor’s preconditions, the model will roll the character into that basin unless the card explicitly counterweights it.

For the full attractor avoidance taxonomy, see Unlock AI Character Behavior Through Effective Naming.

The key insight for probability field prompting is that attractor basins are pre-shaped terrain in the field that you didn’t put there. They exist because of the model’s training data. Your card is sculpting a landscape on top of terrain that already has deep valleys carved by millions of examples of Byronic heroes, femme fatales, wise mentors, and mysterious loners.

Probability field prompting accounts for this by:

  1. Identifying which attractors are nearby given the character’s circumstances.
  2. Placing counter-weights — traits, name tokens, or beliefs that raise the walls of the nearby basin, making it harder for the character to fall in.
  3. Creating a stronger competing basin — your intended characterization should be a deeper well than the default archetype.

A character with trauma, intelligence, and isolation has a Byronic hero basin right next to them. You don’t fight this by writing “she is NOT a Byronic hero.” You fight it by making the alternative basin deeper: encoding warmth, simplicity, humor, or physical expressiveness so strongly that the model finds it easier to stay in your basin than to fall into the default.
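The "make the competing basin deeper" strategy can be pictured with a toy one-dimensional landscape: two wells, and a simple descent settles into whichever one pulls harder from a neutral start. This is a metaphor with invented numbers, not a claim about model internals — but it shows why deepening your basin beats fencing off the default one.

```python
import math

# Toy terrain: a default-archetype basin near x = -2 (e.g. the Byronic
# hero) and your intended characterization near x = +2. Deeper well =
# stronger attractor. Purely metaphorical; all numbers are invented.

def landscape(x, default_depth, intended_depth):
    """Sum of two inverted Gaussian wells."""
    return (-default_depth * math.exp(-(x + 2) ** 2)
            - intended_depth * math.exp(-(x - 2) ** 2))

def settle(start, default_depth, intended_depth, step=0.01, iters=5000):
    """Naive finite-difference gradient descent: where does the character land?"""
    x = start
    for _ in range(iters):
        grad = (landscape(x + 1e-4, default_depth, intended_depth)
                - landscape(x - 1e-4, default_depth, intended_depth)) / 2e-4
        x -= step * grad
    return x

# If the default basin is deeper, a neutral start drifts into the archetype.
drifts_to_default = settle(0.0, default_depth=3.0, intended_depth=1.0)
# Deepen the intended basin instead, and the same start settles on your card.
stays_on_card = settle(0.0, default_depth=1.0, intended_depth=3.0)
```

Notice that in neither run did anything block the other basin; the outcome changed purely because one well got deeper. That is the counterweight principle in miniature.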


Part 4: The Unified Principle

Every technique described above is a different tool for the same job:

Sculpting the probability landscape that the model navigates when generating character behavior.

  • Naming seeds the field’s initial shape.
  • Belief systems carve the field’s contours.
  • Implication traits create depth through inference.
  • Voice-embedded description sets the field’s perspective.
  • Attractor management accounts for pre-existing terrain.

The character is the field. Their personality is the peaks and valleys. Their consistency comes from the field’s shape. Their variety comes from the model’s freedom to traverse it. Their growth comes from the field shifting as the narrative evolves.

A directive card is a flat map with labeled waypoints. A probability field card is a living landscape with mountains, rivers, and paths the model discovers on its own.


Part 5: Practical Summary

When building a character card, ask:

  1. Does the name seed the right field? Can the model generate correct behavior from the name alone?
  2. Am I defining beliefs or dictating behaviors? Every directive can be rewritten as the belief that produces it.
  3. Are my traits load-bearing? Each trait should imply more than it states. If a trait only means what it literally says, it’s wasting tokens.
  4. Whose voice is the card written in? If the description reads like an author’s notes, consider rewriting it in the character’s self-perception.
  5. What attractors are nearby? Identify the default archetype the model wants to collapse into, and counterweight it.
  6. Where have I left open terrain? The model needs room to surprise you. If every behavior is specified, the field is flat and the character is dead.

The fundamental shift:

Stop writing scripts. Start sculpting landscapes.

The model is not your actor. It’s your co-author. Give it a psychology to inhabit, a terrain to explore, and a name to anchor it — and it will generate behavior you never wrote but always wanted.



Appendix: Implication Traits in Action

“He has been in the judo club since he was ten. Last year the school coach invited him to join the team, and he believes they can make it to state this year.”

What the Model Generates From This

In a scene that has nothing to do with judo, the model might:

  • Write him settling an argument with physical calm because his body is trained for controlled confrontation
  • Have him defer credit to others because he thinks in terms of “they”
  • Show quiet discipline — he’s done the same thing since he was ten, he knows how to commit
  • Give him a sense of time pressure or seasonal urgency that bleeds into other goals
  • Have him assess situations tactically — judo is reading your opponent’s balance and momentum

A few sentences about a sport that function as a character engine, telling the model how he relates to effort, talent, teamwork, and ambition.


“Sarah has been coveting ugg boots since she saw Betty wearing them and getting compliments last month.”

What This Generates

In a scene about anything:

  • Sarah notices what other people are wearing and what reactions they get
  • She might compliment someone strategically — because she understands compliments as currency since she watched them change how she saw Betty
  • She might hesitate before purchases, weighing social impact over personal taste
  • She might bring up the boots unprompted — because she’s been coveting for a month, it’s occupying mental real estate
  • She might feel a pang when she sees Betty — not hostility, just that specific ache of wanting what someone else has
  • She might finally buy the boots and be crushed if nobody notices

That’s a complete psychological engine for a character’s relationship to desire, status, and social comparison. In one line. And it’ll color scenes that have absolutely nothing to do with footwear.


Developed through empirical testing on Claude, Gemini, and Fiction Lab. These principles are model-agnostic — probability fields are shaped by token associations in training data regardless of which LLM is running.

For the practical naming system that implements field seeding, read: Unlock AI Character Behavior Through Effective Naming
