
research.actor

Every new agent session wastes tokens re-reading your codebase.
With research.actor you run a full research agent once per git commit, cache its analysis, and serve it instantly to every subsequent agent. That saves time and tokens, and the cached analysis doubles as a baseline for deeper research.

Get codebase context in your agent in one command


Installation & Quickstart

Install globally
$ npm install -g research.actor
Install the skill to your AI agent
$ research skill --install
Run analysis with your preferred harness
$ research --harness claude
Ask about specific changes (never cached)
$ research --prompt "explain the auth system"
Read the full documentation →

Features

One analysis per commit, served instantly
The first run on a commit invokes your AI harness to explore the codebase. Every subsequent call returns it from cache instantly. Switch commits and the right cache entry loads automatically.
Organic diff analysis
Working changes are discovered organically via git tools, producing richer insight than a raw patch.
Use any LLM harness
Supports opencode, claude, codex, aider, and gemini out of the box. Implement HarnessRunner for custom agents.
Pluggable cache store
FsStore for filesystem, MemoryStore for in-process, or implement CacheStore for Redis, S3, or databases.
Full TypeScript SDK
Everything is exported. Use analyze() directly, wire up custom stores and runners.
Cache expiry with maxAge
Treat entries older than a given duration as stale. Automatic re-analysis when needed.
Outside repo cache
Cache stored in ~/.cache/research/ so agents don't accidentally read cache files.
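
The pluggable cache store above is just an object with async get/set methods. The sketch below is illustrative only: the actual CacheStore interface and entry shape in @research/core may differ, and TtlMemoryStore is a hypothetical name. It shows the general pattern, an in-memory store that treats entries older than maxAge as misses.

```typescript
// Hypothetical sketch of a CacheStore-style backend; the real interface in
// @research/core may differ. Shown: an in-memory store with maxAge eviction.
interface CacheEntry {
  analysis: string;
  createdAt: number; // epoch ms, used for staleness checks
}

interface CacheStore {
  get(key: string): Promise<CacheEntry | undefined>;
  set(key: string, entry: CacheEntry): Promise<void>;
}

class TtlMemoryStore implements CacheStore {
  private entries = new Map<string, CacheEntry>();
  constructor(private maxAge?: number) {}

  async get(key: string): Promise<CacheEntry | undefined> {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    // Entries older than maxAge count as misses, forcing re-analysis.
    if (this.maxAge !== undefined && Date.now() - entry.createdAt > this.maxAge) {
      this.entries.delete(key);
      return undefined;
    }
    return entry;
  }

  async set(key: string, entry: CacheEntry): Promise<void> {
    this.entries.set(key, entry);
  }
}
```

A Redis or S3 backend follows the same shape: serialize the entry, key it by commit hash, and let the store decide how staleness is enforced.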

How it works

research — First Run

Perform a full analysis on first run

When you start a fresh agent conversation, research invokes your AI harness to explore and analyze the codebase. This takes some time as the agent discovers structure, dependencies, and key files. The analysis is then cached based on the git hash / branch state.

~3-5 seconds

research — Cache Hit

Subsequent LLM threads can use the cached context and ask questions

Subsequent calls return the cached analysis instantly. Working changes are discovered via git tools and merged with the cached base. Ask multiple questions with no waiting.

<100ms

The analysis is cached to ~/.cache/research/ keyed by git commit hash. Switch branches or commits and the correct cache entry loads automatically. Cache entries can expire with --max-age for automatic re-analysis.

How it works in detail →

Without research.actor

Without caching, every agent conversation starts from scratch. Each task triggers a full codebase exploration, burning tokens and time on every request.


With research.actor

With caching, agents get instant context and can ask targeted questions. The same tasks complete faster with minimal token usage.


Integrate into your application using the TypeScript SDK

Install the SDK
$ npm install @research/core
import { analyze, FsStore } from "@research/core"
import type { HarnessRunner, RunRequest, RunResult } from "@research/core"

// Custom in-process runner — no subprocess needed
class MyAgentRunner implements HarnessRunner {
  readonly name = "my-agent"

  async run(req: RunRequest): Promise<RunResult> {
    const output = await myAgent.query(req.prompt, { cwd: req.cwd })
    return { output }
  }
}

// Persistent cache + custom runner
const result = await analyze({
  runner: new MyAgentRunner(),
  store: new FsStore(),
  prompt: "what changed in the auth layer?",
  maxAge: 2 * 60 * 60 * 1000, // 2 hours
})

console.log(result.analysis)
console.log(result.fromCache) // true if base was cached
SDK reference →

Ready to cache your codebase?

Privacy Policy

This is a static website. No personal data is collected or stored.

We only collect anonymized page view data through:

• A self-hosted Plausible Analytics instance (privacy-focused, no cookies, no personal data)

• Cloudflare Analytics (anonymized visitor statistics only)

No cookies are used. No personal information is tracked or shared.

Responsible for content: Lukas Mateffy