# Context
The context component handles the full lifecycle of getting relevant information into the agent's token window. It ingests raw content, packs memories into a budget, and preserves facts before compaction.
## ingest()

Process raw content into memories. Handles chunking, deduplication, contradiction detection, and optional enrichment.

```ts
const result = await harness.context().ingest(content, {
  scope: "session",
  sourceType: "document",
})
// result.written — number of memories created
// result.skipped — number deduplicated
```
Behavior depends on the active profile's ingest settings:
| Setting | Description |
|---|---|
| mode | "session" (whole input), "chunk" (split into chunks), "turn-context" (conversation turns) |
| chunkSize | Max characters per chunk (default varies by profile) |
| chunkOverlap | Overlap between consecutive chunks, in characters |
| enrich | Whether to enrich chunks with surrounding context |
| enrichMode | "augment" (add context) or "rewrite" (rephrase with context) |
| latentBridging | Create latent semantic bridges between related chunks |
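To picture how chunkSize and chunkOverlap interact in "chunk" mode, here is a minimal sketch; the `chunk` helper is hypothetical and not part of the API, and it assumes both settings are character counts (per the table above):

```ts
// Hypothetical sketch of "chunk" mode: split text into windows of
// chunkSize characters, each sharing chunkOverlap characters with
// the previous window. Not the library's actual implementation.
function chunk(text: string, chunkSize: number, chunkOverlap: number): string[] {
  const chunks: string[] = []
  const step = chunkSize - chunkOverlap
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize))
    if (start + chunkSize >= text.length) break // last window reached the end
  }
  return chunks
}

// chunk("abcdefghij", 4, 2) → ["abcd", "cdef", "efgh", "ghij"]
```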
## pack()

Assemble relevant memories into a token budget for the next LLM call:

```ts
const packed = await harness.context().pack(queryEmbedding, {
  budgetTokens: 4000,
  includeEdges: true,
})
```
Returns:
| Field | Type | Description |
|---|---|---|
| text | string | Formatted context string ready for the prompt |
| count | number | Number of memories included |
| estimatedTokens | number | Estimated token count |
| memories | MemoryEntry[] | The memories used |
The packer selects the most relevant memories that fit within the token budget, respecting the profile's context.budgetRatio and context.maxPackItems settings.
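As a rough sketch of that selection policy, assuming a greedy pick in descending relevance order and a ~4-characters-per-token estimate (both illustrative assumptions, not the library's documented algorithm; `packSketch` is a made-up name):

```ts
// Illustrative greedy packer: take memories by descending score until
// the token budget or the item cap is hit. Assumes ~4 chars per token.
interface Memory { text: string; score: number }

const estimateTokens = (text: string) => Math.ceil(text.length / 4)

function packSketch(memories: Memory[], budgetTokens: number, maxPackItems: number) {
  const sorted = [...memories].sort((a, b) => b.score - a.score)
  const picked: Memory[] = []
  let used = 0
  for (const m of sorted) {
    const cost = estimateTokens(m.text)
    if (picked.length >= maxPackItems || used + cost > budgetTokens) break
    picked.push(m)
    used += cost
  }
  return {
    text: picked.map((m) => m.text).join("\n"),
    count: picked.length,
    estimatedTokens: used,
    memories: picked,
  }
}
```

The return shape mirrors the table above (text, count, estimatedTokens, memories).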
## preserve()

Extract durable facts from conversation messages before they get compacted away:

```ts
await harness.context().preserve(messages, {
  scope: "user",
})
```
Uses the configured extraction strategy:
| Strategy | Description |
|---|---|
| rules | Signal-word matching ("I learned", "remember that", "my name is"). Zero LLM calls. |
| manual | No automatic extraction; you call memory().write() yourself. |
| llm | Uses a configurable extractFn for LLM-based extraction. |
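The rules strategy can be pictured as a simple scan for signal phrases. The sketch below is illustrative only: the phrases come from the table above, but `extractFactsSketch` and its matching logic are assumptions, not the library's actual matcher:

```ts
// Illustrative rules-style extraction: keep lines containing a signal
// phrase. Zero LLM calls. Not the library's real implementation.
const SIGNALS = ["i learned", "remember that", "my name is"]

function extractFactsSketch(messages: { role: string; content: string }[]): string[] {
  const facts: string[] = []
  for (const msg of messages) {
    for (const line of msg.content.split("\n")) {
      if (SIGNALS.some((s) => line.toLowerCase().includes(s))) {
        facts.push(line.trim())
      }
    }
  }
  return facts
}
```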
## reconcile()

Background maintenance. Call periodically (or let the profile's autoReconcile handle it):

```ts
await harness.context().reconcile()
```
What it does:
- Promotes high-access memories (increases their retrieval priority)
- Merges near-duplicate memories
- Cleans orphaned relationship edges
- Runs at the interval set by reconciliation.reconcileInterval (e.g., every 25 turns)
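The interval gate can be sketched as a simple turn-count check; `shouldReconcile` is a hypothetical helper, not a real API, and it assumes a monotonically increasing turn counter:

```ts
// Hypothetical interval check: reconcile every N turns, mirroring
// reconciliation.reconcileInterval. Not part of the actual API.
function shouldReconcile(turn: number, reconcileInterval: number): boolean {
  return turn > 0 && turn % reconcileInterval === 0
}

// With reconcileInterval = 25, this fires on turns 25, 50, 75, …
```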