
The AI Power Equation

Four factors that multiply together to determine your effectiveness with AI. Understand these, and you can 10-100x the progress you achieve.

A quantitative hedge fund needed to migrate their entire data pipeline. Hundreds of mission-critical features. 50+ ingestion scripts scattered across different job schedules, filesystems, and data formats. The kind of sprawling legacy system where one wrong number means real money lost.

Their engineer was already using Codex. Capable model, decent tools. But the project was estimated at 140 hours of focused engineering time — 2-3 months of careful work.

I came in, built an Intent Layer over their codebase, then coached their engineer through the skills in this article — in the correct sequence.

10 hours of paired sessions later: Complete migration. 14 data sources. 197 out of 203 columns verified at parity. Plus an operational dashboard they didn't even ask for.

Same engineer. Same tools. 14x faster.

What made the difference? Not the model — they already had a good one. Not raw coding ability — their engineer was competent. The difference was understanding which factors actually multiply your AI power, and developing them in the right order.

The Mental Model

Imagine a vast, high-dimensional space representing every possible state of your project. Somewhere in that space is your current state. Somewhere else is your goal. The line connecting them is your Intent Vector — the arc of progress you want to make.

When you launch an AI agent, you're sending it on a journey through this space:

  • Where it starts — determined by the context you provide (your codebase, your situation)
  • Which direction it goes — determined by how well you communicate your intent
  • How long it runs — determined by whether it can measure its own progress and chain sessions
  • What it's capable of — determined by your model's intelligence, speed, and trust settings

If you align the agent well, it travels straight along your intent vector and arrives at your goal. If alignment is weak, it drifts — careening off course, bouncing around, or gradually wandering as context rots[1]. (More on drift in the Alignment section below.)

Throughput enters when you launch multiple agents in parallel — each charting its own arc through goal space — and maximize your total productive agent-hours.

The Equation

Capability × Alignment × Duration × Throughput = AI Power
The AI Power Equation
  • Capability = the agent's raw horsepower
  • Alignment = how tightly bound to your intent vector
  • Duration = how far it can travel before stopping
  • Throughput = your total productive agent-hours

Four factors that multiply together. Understand these, and you can 10-100x the progress you achieve.

The Hierarchy

Each dimension is composed of factors that determine its magnitude. Factors are shaped by mechanisms — concepts that interact to determine how the factor works. And techniques are learnable practices that improve those mechanisms.

Dimension → Factor → Mechanism → Technique
| Level | What It Is | How It Interacts | Example |
| --- | --- | --- | --- |
| Dimension | Top-level multiplier | Dimensions multiply together | Duration |
| Factor | What determines dimension magnitude | Factors multiply within a dimension | Persistence |
| Mechanism | Concepts that shape how a factor works | Mechanisms interact (min, upper bounds, etc.) | Iteration |
| Technique | Learnable practices | Techniques improve mechanisms | Ralph Wiggum Pattern |

Key insight: Factors multiply. Mechanisms interact in various ways — some multiply, some create upper bounds, some use min(). This distinction matters for knowing where to invest.

The Capability Trap

People think all of AI's power comes from the model. They try ChatGPT in a browser, watch it fumble without context, and conclude "AI just isn't there yet."

I heard this dozens of times from the quant firm before they let me help. They were certain AI couldn't handle their complexity. Ten hours later, we'd migrated their entire data pipeline.

What changed wasn't the model — they already had a good one. What changed was alignment. We gave the agent reach into their codebase, built context infrastructure, and taught their engineer how to communicate intent effectively.

New models ARE amazing. But if you ignore the other factors — especially alignment — you're going to have a woefully outdated understanding of what AI can actually do. A Ferrari with no steering wheel is still just an expensive way to crash.

Skipping Prerequisites

Duration techniques and multi-agent orchestration are NOT marketing hype. They work. They provide massive leverage. But only if you're ready to handle them.

Skip to Duration without Alignment: You see posts about agents running for hours. You try to replicate it. But your agent just runs for hours in the wrong direction. Longer isn't better if alignment is broken. You come back to a mess.

Skip to Throughput without Alignment or Duration: You fire up multiple agents without context infrastructure or verification. Even a single unaligned agent can spray slop all over your goals — this is slopapalooza[2]. Or worse: you have aligned agents but didn't intelligently coordinate, so they start knocking into each other. Destructive interference can knock both agents out of alignment, compounding the chaos.

The Path

The factors build on each other. This is the order to develop them:

  1. Capability is table stakes — The models are incredible now. Pick one, give it appropriate trust, move on. Don't get stuck here thinking AI "isn't there yet."

  2. Focus on Alignment — This is where most people are stuck, and where most learnable skills live. Put intelligence where your data lives. Build context infrastructure. Learn to explore before committing. Get tight binding to your intent vector. This is probably where you should invest.

  3. Add Duration — Build a verification harness for your domain. Learn chaining. Get agents running long enough that you can context-switch to other work.

  4. Then Throughput — Only now. You need aligned, persistent agents before parallelization makes sense.

Why Multiplication Matters

Consider someone with:

  • Capability: 10 (bleeding edge model)
  • Alignment: 2 (minimal context)
  • Duration: 1 (no verification)
  • Throughput: 1 (one agent)

Their power is 10 × 2 × 1 × 1 = 20x

Now compare the ROI of different improvements:

| Improvement | New Power | Gain | Effort Required |
| --- | --- | --- | --- |
| Capability 10 → 12 | 24x | +20% | Very hard (diminishing returns at the frontier) |
| Alignment 2 → 5 | 50x | +150% | Learnable (build context infrastructure) |
| Duration 1 → 3 | 60x | +200% | Buildable (create verification harness) |
| Throughput 1 → 3 | 60x | +200% | Only works if Duration is solved first |

The math is brutal: Grinding from 10→12 in your strongest factor takes more effort and gives less return than going 1→3 in your weakest.
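
A quick arithmetic check of those numbers, as a minimal sketch (the scores are the illustrative 1-10 ratings from above, not measurements):

```python
# Minimal sketch of the multiplier arithmetic behind the ROI table.
def ai_power(capability, alignment, duration, throughput):
    return capability * alignment * duration * throughput

baseline = ai_power(10, 2, 1, 1)          # 20
print(ai_power(12, 2, 1, 1) / baseline)   # 1.2  -> +20% from grinding Capability
print(ai_power(10, 5, 1, 1) / baseline)   # 2.5  -> +150% from improving Alignment
print(ai_power(10, 2, 3, 1) / baseline)   # 3.0  -> +200% from improving Duration
```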

Raise your floor, not your ceiling.

Most people haven't seriously thought about context infrastructure, exploration, verification, or multi-agent coordination. They're running on defaults. Which means there's massive low-hanging fruit for those willing to learn.

The optimal strategy: Identify your weakest factor. Improve it until another factor becomes the constraint. Repeat.

Now let's dive into each factor.

Capability

The raw power available to you.

Capability is determined by four mechanisms:

Intelligence × Speed × Trust ÷ Cost = Capability
Capability breakdown

Intelligence

The underlying pattern extraction ability: Claude Opus 4.5, GPT-5.2 High, Gemini, etc.

At its core, a model is a pattern extraction engine. Given information about a situation, how many useful patterns can it identify? How complex a problem can it solve?

Think of goalspace as terrain. Some paths are paved roads — easy to traverse. Others are storming volcanic mountains with lightning. Intelligence determines what difficulty of terrain the agent can cross.

There are many paths to your goal because goalspace is vast. But sometimes the optimal or most direct path requires complex patterns to be extracted and generalized robustly, which might not be possible with the current generation of model. A persistent agent might instead "build a boat and sail around the mountain" — takes longer, but still arrives.

The probabilistic nature of intelligence: Due to the high-dimensional space and probabilistic nature of LLMs, there's inherent stochasticity. At some intelligence level, you might solve a given problem 1/100 times. At a higher level, 50/100. Even higher, 99/100. As intelligence increases and pattern extraction improves, models converge onto correct solutions more reliably. Reliability is downstream of intelligence, not a separate mechanism.

Model selection is the primary technique for improving Intelligence. Choose the right model for your task — sometimes that's the frontier model, sometimes it's a smaller model that's faster and cheaper.

Speed

How fast the agent can think. Measured in tokens per second.

Speed determines how quickly work gets done. A faster agent completes tasks in less wall-clock time, enabling tighter feedback loops and more iterations.

Techniques for improving Speed:

  • Model selection — Smaller models are often faster
  • Provider selection — Different providers have different inference speeds
  • Token efficiency — Fewer tokens = faster responses (also reduces cost)

Trust

What actions are possible. Trust has two components that together determine the action space:

1. What YOU let the model do (Permissions)

  • YOLO mode vs locked down
  • Fear constrains capability
  • You can't leverage what you won't let the agent do

2. What the MODEL lets you do (Guardrails)

  • Some models refuse certain requests
  • Open models have fewer restrictions
  • Locked-down models constrain your action space regardless of your permissions

The interaction: Your effective action space is the intersection of what you allow AND what the model will comply with.
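
A toy sketch of that intersection (the action names are invented for illustration):

```python
# Effective action space = what you permit AND what the model will comply with.
your_permissions = {"read_files", "write_files", "run_tests", "deploy"}
model_will_comply = {"read_files", "write_files", "run_tests", "browse_web"}

effective_action_space = your_permissions & model_will_comply
print(effective_action_space)  # {'read_files', 'write_files', 'run_tests'}
```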

Key insight: Trust is earned through alignment. As you get better at aligning agents, you can safely give them more permissions, unlocking more capability.

Cost

The economic constraint. Measured in dollars per million tokens ($/Mtok).

Cost is in the denominator because higher cost reduces effective capability — you can do less with the same budget.

Techniques for managing Cost:

  • Model selection — Frontier models cost more, smaller models cost less
  • Token efficiency — Fewer tokens = lower cost
  • Provider selection — Prices vary across providers
  • Caching — Reuse responses where possible

Key insight: Cost matters because it constrains how much you can run. An expensive model you can only afford to run occasionally has less effective capability than a cheaper model you can run constantly.
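
A back-of-envelope sketch of that constraint (all prices, token counts, and budgets are illustrative assumptions, not real provider pricing):

```python
# How far a budget stretches at a given price per million tokens.
price_per_mtok = 15.00            # dollars per million tokens (illustrative)
tokens_per_agent_hour = 400_000   # tokens one agent burns per hour (illustrative)
monthly_budget = 1_000.00         # dollars

cost_per_agent_hour = price_per_mtok * tokens_per_agent_hour / 1_000_000
affordable_agent_hours = monthly_budget / cost_per_agent_hour
print(cost_per_agent_hour)        # 6.0 dollars per agent-hour
print(affordable_agent_hours)     # ~166 agent-hours per month at this price
```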

Capability is table stakes now. With Opus 4.5, GPT-5.2 High, and other frontier models widely available, everyone has access to incredible intelligence. The models are there. The speed is there. The cost is manageable. Trust is the one lever you control directly.

Most people get stuck at Capability — they try a model, watch it fumble without context, and conclude "AI isn't there yet." But the model IS there. The bottleneck is elsewhere.

Alignment

How tightly bound the agent is to your intent vector.

This is where most people have their biggest gap. They use ChatGPT in the browser, watch it struggle without context, and conclude AI can't help them. But they're the bottleneck — not the AI.

Think about it: when you go to the intelligence at the store (ChatGPT in a browser), it has no idea what's in your house. It only knows what you bring to it. You become the bottleneck, manually copying and pasting context, explaining your situation from scratch every time.

The unlock: put the intelligence where your data lives. Install it at home. Let it look around, explore your files, understand your world. Remove yourself as the bottleneck.

Once you give the AI reach into your world, alignment improves dramatically. It can understand your intent because it can see what you're working with.

Alignment is determined by two mechanisms that multiply together:

Context × Intent = Alignment
Alignment breakdown

Two mechanisms: where you are and where you're going.

This is naturally where most learnable skills live. Why? Because this factor involves the most human elements:

  • Clarifying and understanding your own intent
  • Developing theory of mind for the agent
  • Recognizing what information you use — consciously or subconsciously — to make decisions
  • Ensuring the agent has access to that same understanding

In practice, that means setting up your context ecosystem in a flexible, powerful way; delivering the right context and intent signals through stream of consciousness, file/snippet/folder injection, examples, and screenshots; and compounding your engineering by ensuring learnings get captured for future agents[3].

Context: Where You Are

It’s not about you knowing where you are. You already know. The issue is that the agent is effectively an amnesiac — each new chat starts with very little of your hard-won context.

Context is about making the agent effective in your world. I’ve found the cleanest framing is:

Reach × Understanding = Context
Context breakdown (the clean version)
  • Reach: the size of the world the agent can access (files, systems, artifacts — including what's in your head once you write it down).
  • Understanding: the quality of the agent's mental map of that world (how it's organized, what matters, how things connect).

You want to maximize both. Reach without understanding = the agent can see everything but doesn’t know what it’s looking at. Understanding without reach = the agent has a map of a world it can’t access.

Intent: Where You're Going

How well you convey your desired vector through goalspace.

The agent needs to understand what you want to accomplish — not just the immediate task, but the underlying goal, the constraints, what "done" looks like.

Intent has two mechanisms that create an upper bound on each other:

Discovery (figure out what you want)

You might not know your ideal intent vector until you explore. Discovery is the process of figuring out what you actually want.

Discovery has TWO parts:

  1. Understanding what YOU want — your own goals, needs, desires
  2. Understanding what the AGENT is capable of — what's even possible

If your mental model of the agent's capability is too constrained, you constrain your own imagination. Learned helplessness: you don't ask for things you don't know are possible.

  • Inject Uncertainty — Express doubt to trigger critical thinking, not blind execution. "I'm not sure if X is the right approach..." opens exploration.
  • Exploration Mode — Shift from "do this" to "help me think about this." Use the agent to sample nearby goalspace and discover what's possible.
  • Iterate on Output — Treat every response as a draft. Refine until you're certain the agent truly understands your intent.

Specification (communicate what you want)

Once you've discovered what you want, you need to communicate it to the agent in a way it can understand and execute.

  • Dream Outcome Definition — Write down what "done" looks like in vivid detail. Often emerges from Discovery. The clearer your vision, the better the agent can aim.
  • Pre-Flight Planning — Have the agent ask clarifying questions, walk through the approach, envision side effects. Pull issues left before you start. Models are great at predicting what will go wrong.
  • Context Pack — For tricky tasks, bundle the most relevant reference docs into a focused set. Critical for Throughput/coordination — each lane needs its own context pack.

The interaction: You can't specify what you haven't discovered. And discovery without specification is just daydreaming. Intent = min(Discovery, Specification).

Key insight: Agents, like life, reward the specific ask and punish the vague wish. Discovery helps you figure out what you want; specification makes it executable.
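
As a concrete instance of the Context Pack technique above, here is a minimal sketch of bundling reference docs for one lane of work (every path and filename is hypothetical):

```python
# Copy the handful of docs this lane actually needs into one folder the
# agent reads first, instead of hoping it finds them in a sprawling repo.
import shutil
from pathlib import Path

pack = Path("context_packs/ingestion_migration")
pack.mkdir(parents=True, exist_ok=True)

for doc in [
    "docs/data_dictionary.md",
    "docs/legacy_job_schedules.md",
    "specs/ingestion_migration.md",
]:
    shutil.copy(doc, pack / Path(doc).name)  # one focused set per lane
```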

The Alignment Unlock

Once you're able to bind an agent to your intent reliably, you'll see them making meaningful progress for a while. They stay on your intent vector, doing useful work.

But then they stop. They ask: "Am I still on track? Is this what you wanted?"

They're RLHF'd to please you. Without feedback, they get uncertain and pause.

This is the ceiling of Alignment alone. To break through, you need Duration.

Duration

How long the agent can work toward your goal without stopping.

Duration is determined by Persistence — the agent's ability to keep working despite the forces that want to stop it.

Persistence = Duration
Duration breakdown

Persistence has two mechanisms that interact — your agent runs until the first bottleneck stops it:

min(Feedback, Iteration) = Persistence
Persistence breakdown

Agents stop for two reasons. Each mechanism solves one:

| Failure Mode | Why It Happens | Mechanism |
| --- | --- | --- |
| Uncertainty | Agent doesn't know if it's on track. Stops to ask "Is this right?" | Feedback — give it a way to measure progress |
| Context Exhaustion | Context rots, fills up, or loses coherence | Iteration — chain sessions with preserved state |

Key insight: Whichever mechanism is weaker becomes your bottleneck. If Feedback = 8 and Iteration = 2, your effective Duration is 2. Fix the lower one first.

Feedback: Solving Uncertainty

Agents want to know if they're making progress. Without feedback, they stop and ask: "Am I doing this right?" This caps how long they can run.

Soft, vibey evaluations require human feedback. You become the bottleneck.

The solution: Can you codify how to measure distance from your goalstate such that the agent can iterate against it on its own?

Key technique: Verification Harness

A verification harness provides automated feedback the agent can use to know where it is in goalspace, understand the gradient, and know it's moving in the right direction:

  • Run tests → pass/fail
  • Check output against expected → match/mismatch
  • Validate constraints → satisfied/violated

With a verification harness, the agent can iterate until it passes — for hours, unsupervised.

What makes a good verification harness:

  • Clean — No side effects (can rerun, can parallelize)
  • Unambiguous — Clear pass/fail signal
  • Automatable — Agent can run it without you
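
As an illustration, here is a minimal parity-check harness in the spirit of the migration story earlier. It assumes the legacy and migrated pipelines both emit a parquet file; the paths, file format, and tolerance are illustrative assumptions:

```python
# Compare each column of the legacy output against the new pipeline's output
# and exit with an unambiguous pass/fail signal the agent can loop on.
import sys
import pandas as pd

def verify_parity(legacy_path: str, migrated_path: str, tolerance: float = 1e-9) -> bool:
    legacy = pd.read_parquet(legacy_path)
    migrated = pd.read_parquet(migrated_path)
    failures = []
    for col in legacy.columns:
        if col not in migrated.columns:
            failures.append(f"missing column: {col}")
        elif legacy[col].dtype.kind in "if":  # numeric columns: allow tiny drift
            if (legacy[col] - migrated[col]).abs().max() > tolerance:
                failures.append(f"numeric mismatch: {col}")
        elif not legacy[col].equals(migrated[col]):  # everything else: exact match
            failures.append(f"mismatch: {col}")
    for failure in failures:
        print(failure)
    return not failures

if __name__ == "__main__":
    ok = verify_parity("legacy/features.parquet", "migrated/features.parquet")
    sys.exit(0 if ok else 1)  # clean, unambiguous, automatable
```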

Iteration: Solving Context Exhaustion

Even with perfect feedback, context windows are finite. Context rots over long sessions. The agent might have tests passing, but its "mental model" of the task degrades as the window fills with noise.

The insight: Instead of fighting context limits, work with them. Accept that sessions are finite. Chain them together with preserved state.

Iteration techniques:

  1. Summarization (manual) — When context fills, have the agent dump a summary of progress and state. Start fresh session with the summary.
  2. Persistent State Files (semi-automatic) — Keep critical context in files that survive across sessions:
    • AGENT.md — Learnings about how to operate in this codebase
    • fix_plan.md — Priority-sorted TODO list
    • specs/ — Specifications that get "burned in" each session
  3. The Ralph Wiggum Pattern (fully automatic) — Run the agent in an infinite loop. Each iteration: fresh context, do one thing, run tests, commit on success, update state files, repeat. The loop IS the iteration mechanism.

Key insight: Iteration uses context/alignment techniques (checkpointing, persistent files) combined with looping to produce Duration. Neither checkpointing alone nor looping alone gets you there — it's the combination.
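
A minimal sketch of that loop, assuming a hypothetical `agent` CLI and pytest as the verification harness:

```python
# Ralph Wiggum Pattern: an outer loop that keeps re-launching the agent with
# fresh context. The `agent` CLI, prompt file, and test command are
# placeholders for your own tooling.
import subprocess

while True:
    # Fresh context each iteration: the prompt tells the agent to read
    # AGENT.md and fix_plan.md, do ONE item, and update the state files.
    subprocess.run(["agent", "run", "--prompt-file", "PROMPT.md"])  # hypothetical CLI

    tests = subprocess.run(["pytest", "-q"])  # the verification harness
    if tests.returncode == 0:
        subprocess.run(["git", "add", "-A"])
        subprocess.run(["git", "commit", "-m", "agent iteration: tests green"])
    # On failure nothing is committed; the next iteration starts clean, and
    # the state files left behind tell the fresh agent where things stand.
```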

The Duration Unlock

You're ready for Throughput when your agents run long enough for you to context-switch and dispatch another agent along a different goal path.

  • Starcraft pro with 1-minute agents: Can juggle 5-6 by bouncing rapidly
  • Deep thinker with 2-hour agents: Can juggle 5-6 throughout the day

The threshold depends on you. But the unlock is the same: agents that run to completion without you.

Throughput

Your total productive agent-hours.

Throughput is determined by three mechanisms that multiply together:

Managing × Coordination × Accessibility = Throughput
Throughput breakdown

The key insight: It's not about "more agents" — it's about total productive agent-hours. 12 agents for 30 minutes might be less valuable than 1 agent running all day. Throughput measures the total.

Prerequisites

Throughput requires Alignment and Duration to be solved first:

  • Throughput without alignment = slop cannon (many agents going wrong directions)
  • Throughput without duration = babysitting (many agents all needing constant feedback)

Managing: Multiple Independent Goals

How many independent intent vectors can you maintain simultaneously?

Managing isn't just about how many goals you juggle — it's also about HOW you dispatch agents to those goals.

| Mode | Trigger | Example | Your Presence Required? |
| --- | --- | --- | --- |
| Active | You explicitly dispatch | Typing in Cursor, voice command, phone message | Yes |
| Proactive | Scheduled time event | Cron job at 9am, "every Monday morning" | No (after setup) |
| Reactive | External data/action event | Email arrives, webhook fires, PR opened | No (after setup) |

Key insight: As agent Duration increases, Managing gets easier.

  • If agents run 1 minute at a time, you need Starcraft-pro APM to keep 5-6 going
  • If agents run 2 hours at a time, a slow thinker can easily manage 5-6 throughout the day
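
To make the proactive mode concrete, a minimal sketch of a scheduled dispatcher (the `agent` CLI and goal string are hypothetical; a plain cron entry is the more common way to wire this up):

```python
# Launch an agent on a schedule instead of typing the kickoff by hand.
import datetime
import subprocess
import time

def launch_agent(goal: str) -> None:
    subprocess.Popen(["agent", "run", "--goal", goal])  # fire and forget (hypothetical CLI)

while True:
    now = datetime.datetime.now()
    if now.hour == 9 and now.minute == 0:  # every morning at 9:00
        launch_agent("triage overnight data-quality alerts and open fixes")
        time.sleep(60)  # step past the minute so we don't double-fire
    time.sleep(20)
```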

Coordination: Multiple Agents on One Goal

How effectively can you decompose a single goal into non-interfering lanes for multiple agents?

This is the Mythical Man Month problem[4]. Adding more agents to a goal doesn't automatically help — it can make things worse.

The issue: destructive orbital interference.

When multiple agents work on vectors that are too close together in goalspace, their drift patterns overlap. They step on each other's work. They make conflicting changes. They undo each other's progress. The interference can knock both agents out of alignment.
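
One way to keep lanes from interfering is to give each agent an explicit, disjoint slice of the codebase. A minimal sketch, with invented lane names, paths, and goals:

```python
# Decompose one goal into non-interfering lanes: no two agents own the same files.
lanes = {
    "ingestion": {"paths": ["pipelines/ingest/"], "goal": "port ingestion scripts"},
    "features":  {"paths": ["pipelines/features/"], "goal": "port feature transforms"},
    "dashboard": {"paths": ["dashboard/"], "goal": "build the parity dashboard"},
}

# Cheap interference check before dispatch: a path may belong to only one lane.
claimed = set()
for name, lane in lanes.items():
    for path in lane["paths"]:
        assert path not in claimed, f"lane collision: {path} claimed twice"
        claimed.add(path)
```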

Accessibility: Steering Uptime

What percentage of the day can you effectively steer your agents?

This is about the human interface. How available and present is your access to agents?

If you can only steer from your full workstation, that's maybe 20% of your day. From your phone, 30%. Voice assistant, 40%. Smart glasses, 50%. Neural interface, 100%.

Key insight: Accessibility multiplies your agent-hours. Same agents, same goals, but more hours of productive steering = more total output.

Visualizing Your Multiplier

Think of each factor as a score from 1-10 and multiply them together. For example:

  • Capability: 5 (model quality, speed, trust, cost efficiency)
  • Alignment: 2 (context richness × intent clarity)
  • Duration: 1 (verification harness, iteration loops)
  • Throughput: 1 (parallel agents, accessibility)

5 × 2 × 1 × 1 = 10 AI Power

Tip: Find your lowest factor — that's where you'll get the biggest gains. Going from 1→3 gives a 3× improvement, while going from 8→10 only gives 1.25×. Likewise, going from 1→5 in your weakest area gives 5×, while going from 5→6 in a strong area gives only 1.2×.

ROI Comparison: Where to invest?

Starting from the earlier baseline (Current Power: 20):

| Improvement | New Power | Gain | Effort |
| --- | --- | --- | --- |
| Capability (maxed) | - | - | Very hard (diminishing returns at the frontier) |
| Alignment 2 → 5 | 50 | +150% | Learnable (build context infrastructure) |
| Duration 1 → 3 | 60 | +200% | Buildable (create verification harness) |
| Throughput 1 → 3 | 60 | +200% | Conditional (only works if Duration is solved) |

Key insight: Raising your floor beats raising your ceiling. Small improvements to weak factors yield massive gains.

What High Power Enables

As your multiplier grows, new capabilities become possible:

One-Shotting (Alignment + Duration)

Agent has the context, can verify its work, completes the task in one conversation. No back-and-forth, no babysitting. You kick it off and come back to finished work.

Multi-Agent One-Shot (Alignment + Duration + Throughput)

Many agents, all aligned, all with verification, all complete independently. This is what power looks like at high throughput.

Learning From the AI (High Alignment via Context + Discovery)

With enough context and discovery, the AI finds details about YOUR system that you didn't know. It spots patterns, catches edge cases, suggests improvements. The student becomes the teacher.

The Expanded Equation

At the top level:

Capability × Alignment × Duration × Throughput = AI Power

Expanding the factors:

(Intelligence × Speed × Trust ÷ Cost) × (Context × Intent) × Persistence × (Managing × Coordination × Accessibility) = AI Power
Mechanisms expanded

Each mechanism is a lever. Small improvements compound across all of them.
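
Written out as a single function (a sketch; every input is a subjective rating in the spirit of the 1-10 scores above, not a measured quantity):

```python
# The expanded equation, with mechanisms rolled up into their dimensions.
def ai_power(intelligence, speed, trust, cost,
             context, intent,
             persistence,
             managing, coordination, accessibility):
    capability = intelligence * speed * trust / cost
    alignment = context * intent
    duration = persistence
    throughput = managing * coordination * accessibility
    return capability * alignment * duration * throughput
```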

Cutting Through the Noise

Your feed is full of "revolutionary" AI announcements. New model! New tools! New framework! Most of these are either marginal improvements to a single factor, or someone discovering a factor for the first time.

Once you understand all four factors and develop your skills accordingly, you can discern which announcements are genuine advances and which are distractions.

When Alignment is broken, a better model just means faster drift. When Duration is unsolved, a smarter agent still stops every 5 minutes for feedback. When Throughput is naive, you get slopapalooza instead of productive agent-hours.

The leverage isn't hiding in the latest frontier model. It's hiding in the factors most people haven't thought about.

Want help applying the AI Power Equation?

We run hands-on training that teaches these skills directly on your codebase — building real capability, not just awareness.