The Agent class is the core primitive of OpenHarness. An agent wraps a language model, a set of tools, and a multi-step execution loop into a stateless executor: you call `run()` with a message history and new input.

## Creating an Agent

```ts
import {
  Agent,
  createFsTools,
  createBashTool,
  NodeFsProvider,
  NodeShellProvider,
} from "@openharness/core";
import { openai } from "@ai-sdk/openai";

const fsTools = createFsTools(new NodeFsProvider());
const { bash } = createBashTool(new NodeShellProvider());

const agent = new Agent({
  name: "dev",
  model: openai("gpt-5.4"),
  systemPrompt: "You are a helpful coding assistant.",
  tools: { ...fsTools, bash },
  maxSteps: 20,
});
```

## Running an Agent

`agent.run()` is an async generator that takes a message history and new input, and yields a stream of typed events as the agent works. The agent is stateless; it doesn't accumulate messages internally. You pass the conversation history in and get the updated history back in the `done` event.

```ts
import type { ModelMessage } from "ai";

let messages: ModelMessage[] = [];

for await (const event of agent.run(messages, "Refactor the auth module to use JWTs")) {
  switch (event.type) {
    case "text.delta":
      process.stdout.write(event.text);
      break;
    case "tool.start":
      console.log(`Calling ${event.toolName}...`);
      break;
    case "tool.done":
      console.log(`${event.toolName} finished`);
      break;
    case "done":
      messages = event.messages; // capture updated history for next turn
      console.log(`Result: ${event.result}, tokens: ${event.totalUsage.totalTokens}`);
      break;
  }
}
```
This makes it easy to build multi-turn interactions — just pass the messages from the previous done event into the next run() call. It also means you have full control over the conversation history: you can inspect it, modify it, or share it between agents.
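Because the history is a plain array, pruning or rewriting it between turns is ordinary array manipulation. Below is a minimal sketch; the `trimHistory` helper and its budget are our own illustration, not part of OpenHarness:

```typescript
// Hypothetical helper: keep only the most recent `max` messages so a long
// conversation stays within the model's context window.
function trimHistory<T>(messages: T[], max: number): T[] {
  return messages.length <= max ? messages : messages.slice(messages.length - max);
}
```

Pass the trimmed array into the next `run()` call exactly as you would the full history.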

## Events

The full set of events emitted by `run()`:

| Event | Description |
| --- | --- |
| `text.delta` | Streamed text chunk from the model |
| `text.done` | Full text for the current step is complete |
| `reasoning.delta` | Streamed reasoning/thinking chunk (if the model supports it) |
| `reasoning.done` | Full reasoning text for the step is complete |
| `tool.start` | A tool call has been initiated |
| `tool.done` | A tool call completed successfully |
| `tool.error` | A tool call failed |
| `step.start` | A new agentic step is starting |
| `step.done` | A step completed (includes token usage and finish reason) |
| `error` | An error occurred during execution |
| `done` | The agent has finished; `result` is one of `"complete"`, `"stopped"`, `"max_steps"`, or `"error"` |
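The delta events compose into the full text for a step. As an illustration, here is a small reducer over a simplified event shape (only `type` and `text` fields are assumed here; real events carry more):

```typescript
// Fold streamed text.delta chunks into one string, ignoring other events.
function collectText(events: { type: string; text?: string }[]): string {
  return events
    .filter((e) => e.type === "text.delta")
    .map((e) => e.text ?? "")
    .join("");
}
```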

## Configuration

| Option | Default | Description |
| --- | --- | --- |
| `name` | (required) | Agent name, used in logging and subagent selection |
| `model` | (required) | Any Vercel AI SDK `LanguageModel` |
| `systemPrompt` | | System prompt prepended to every request |
| `tools` | | AI SDK `ToolSet`; the tools the agent can call |
| `maxSteps` | `100` | Maximum agentic steps before stopping |
| `temperature` | | Sampling temperature |
| `maxTokens` | | Max output tokens per step |
| `instructions` | `true` | Whether to load AGENTS.md / CLAUDE.md from the project directory |
| `approve` | | Callback for tool call approval |
| `subagents` | | Child agents available via the task tool (see Subagents) |
| `maxSubagentDepth` | `1` | Maximum nesting depth for subagents |
| `subagentBackground` | | Enable background subagent execution |
| `mcpServers` | | MCP servers to connect to |
| `skills` | | Skills configuration |
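As an example, `approve` can gate dangerous tools before they run. The callback shape below is an assumption for illustration (a predicate over the pending tool call); check the approval documentation for the real signature:

```typescript
const guarded = new Agent({
  name: "guarded-dev",
  model: openai("gpt-5.4"),
  tools: { ...fsTools, bash },
  maxSteps: 20,
  temperature: 0.2,
  // Assumed shape: receives the pending call, returns whether it may run.
  approve: async ({ toolName }) => toolName !== "bash",
});
```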

## Multi-Turn Conversations

Since the agent is stateless, multi-turn is achieved by passing the updated messages back in:

```ts
let messages: ModelMessage[] = [];

// Turn 1
for await (const event of agent.run(messages, "List all files")) {
  if (event.type === "done") messages = event.messages;
}

// Turn 2: the agent sees the full history
for await (const event of agent.run(messages, "Now read the largest file")) {
  if (event.type === "done") messages = event.messages;
}
```
For automatic history management, compaction, and retry, use a Session or Conversation.

## Cleanup

If the agent uses MCP servers or background subagents, call `close()` when done:

```ts
await agent.close();
```
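A try/finally around the run loop guarantees cleanup even when a turn throws. This sketch uses a stand-in object instead of a real Agent so it is self-contained; the pattern is the same with `agent.run()` and `agent.close()`:

```typescript
// Stand-in for an agent whose run() fails mid-stream.
const fakeAgent = {
  closed: false,
  async *run(): AsyncGenerator<string> {
    yield "step";
    throw new Error("model call failed");
  },
  async close(): Promise<void> {
    this.closed = true;
  },
};

async function runOnce(): Promise<boolean> {
  try {
    for await (const _event of fakeAgent.run()) {
      // handle events here
    }
  } catch {
    // log or surface the error as needed
  } finally {
    await fakeAgent.close(); // always release MCP servers / subagents
  }
  return fakeAgent.closed;
}
```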