AI Processing Nodes
Classify, Summarize, Extract & Generate
The Node Pattern
A processing node is a single unit of work in a workflow. It takes an input (an event and context from previous nodes), does something (classify, summarize, extract, generate), and produces a structured result.
Every node follows the same contract:
interface NodeResult {
  nodeId: string;
  status: "pending" | "running" | "completed" | "failed" | "skipped";
  output: Record<string, unknown>;
  durationMs: number;
  tokensUsed?: number;
  error?: string;
}

This consistency is what makes the system composable. The workflow engine doesn't care whether a node uses regex matching or a Claude API call; it only cares about the result shape.
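Concretely, a successful Classifier run might produce a result like this (the field values are illustrative, not real course output):

```typescript
// Repeated from above so the snippet is self-contained.
interface NodeResult {
  nodeId: string;
  status: "pending" | "running" | "completed" | "failed" | "skipped";
  output: Record<string, unknown>;
  durationMs: number;
  tokensUsed?: number;
  error?: string;
}

// Illustrative values for a hypothetical completed run.
const result: NodeResult = {
  nodeId: "classifier",
  status: "completed",
  output: { category: "billing", urgency: "high", sentiment: "negative" },
  durationMs: 12,
};
```

Note that `output` is deliberately loose (`Record<string, unknown>`): each node type decides its own payload, while the envelope stays uniform.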
The Template Method
The BaseNode class implements the template method pattern: the base class defines the execution skeleton, subclasses provide the processing logic.
abstract class BaseNode {
  async execute(input: NodeInput): Promise<NodeResult> {
    const start = Date.now();
    try {
      const output = await this.process(input);
      return { nodeId: this.id, status: "completed", output, durationMs: Date.now() - start };
    } catch (err) {
      // Catch variables are `unknown` in modern TypeScript, so narrow before reading .message.
      const message = err instanceof Error ? err.message : String(err);
      return { nodeId: this.id, status: "failed", output: {}, durationMs: Date.now() - start, error: message };
    }
  }

  protected abstract process(input: NodeInput): Promise<Record<string, unknown>>;
}

Every node gets automatic timing and error isolation for free. If a node throws, the workflow engine sees a structured failure, not a crash.
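To make the pattern concrete, here is a minimal hypothetical subclass. EchoNode and the trimmed-down NodeInput/NodeResult types are assumptions for this sketch, not course code:

```typescript
// Trimmed-down types for a self-contained sketch; the real course types have more fields.
interface NodeInput {
  event: { payload: Record<string, unknown> };
  previousResults?: Record<string, NodeResult>;
}

interface NodeResult {
  nodeId: string;
  status: "completed" | "failed";
  output: Record<string, unknown>;
  durationMs: number;
  error?: string;
}

abstract class BaseNode {
  constructor(protected id: string) {}

  // Template method: skeleton lives here, timing and error isolation included.
  async execute(input: NodeInput): Promise<NodeResult> {
    const start = Date.now();
    try {
      const output = await this.process(input);
      return { nodeId: this.id, status: "completed", output, durationMs: Date.now() - start };
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      return { nodeId: this.id, status: "failed", output: {}, durationMs: Date.now() - start, error: message };
    }
  }

  protected abstract process(input: NodeInput): Promise<Record<string, unknown>>;
}

// A toy node: the subclass implements process() and nothing else.
class EchoNode extends BaseNode {
  protected async process(input: NodeInput) {
    return { subject: input.event.payload.subject };
  }
}
```

Timing and error handling never appear in the subclass; the base class supplies both, which is exactly what the template method pattern buys you.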
Four Node Types
Classifier — Assigns category (billing, technical, account, feature request), urgency (low through critical), and sentiment (positive, neutral, negative). The current implementation uses keyword matching; replacing it with a Claude call would increase accuracy from ~70% to ~95%.
Summarizer — Produces a 1-2 sentence summary suitable for Slack notifications and dashboard cards. Combines customer name, subject, and body preview into a concise string.
Extractor — Pulls structured data from unstructured text using regex patterns: email addresses, URLs, ticket references (TICK-1234), monetary amounts ($49.99), dates (2024-03-15), and product mentions. Returns deduplicated results as structured arrays.
Generator — The final node. Reads outputs from all three upstream nodes and produces a draft response (using category-specific templates), a routing decision (billing-team, engineering-team, etc.), an SLA commitment, and a list of suggested actions.
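The Classifier and Extractor described above can be sketched as follows. The keyword lists and exact regexes here are illustrative assumptions, not the course's actual patterns:

```typescript
type Category = "billing" | "technical" | "account" | "feature-request";

// Illustrative keyword lists; the real node's lists will differ.
const KEYWORDS: Record<Category, string[]> = {
  billing: ["invoice", "charge", "refund", "payment"],
  technical: ["error", "crash", "bug", "broken"],
  account: ["login", "password", "locked out"],
  "feature-request": ["feature", "would be great", "please add"],
};

// Pick the category whose keywords appear most often in the text.
function classify(text: string): Category {
  const lower = text.toLowerCase();
  let best: Category = "technical";
  let bestHits = 0;
  for (const category of Object.keys(KEYWORDS) as Category[]) {
    const hits = KEYWORDS[category].filter((w) => lower.includes(w)).length;
    if (hits > bestHits) {
      bestHits = hits;
      best = category;
    }
  }
  return best;
}

// Pull structured references out of free text, deduplicated via Set.
function extract(text: string) {
  const unique = (xs: string[]) => [...new Set(xs)];
  return {
    tickets: unique(text.match(/TICK-\d+/g) ?? []),
    amounts: unique(text.match(/\$\d+(?:\.\d{2})?/g) ?? []),
    dates: unique(text.match(/\d{4}-\d{2}-\d{2}/g) ?? []),
  };
}
```

The keyword approach is fast and deterministic, which is why it works as a placeholder: the node's output shape won't change when the internals are swapped for a model call.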
Data Flow
Nodes connect through previousResults — a map of nodeId to NodeResult that the workflow engine passes to each node:
// In the Generator node:
const classification = previousResults["classifier"]?.output;
const summary = previousResults["summarizer"]?.output;
const extraction = previousResults["extractor"]?.output;

The Generator doesn't call the Classifier directly. It reads the Classifier's output from the shared results map. This decoupling means you can replace the Classifier implementation without touching the Generator.
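A routing decision built on this pattern might look like the following sketch. The routing table and the fallback queue are assumptions; the source names only billing-team and engineering-team:

```typescript
// Hypothetical routing table keyed on the Classifier's category output.
const ROUTES: Record<string, string> = {
  billing: "billing-team",
  technical: "engineering-team",
};

function decideRouting(previousResults: Record<string, { output: Record<string, unknown> }>) {
  const classification = previousResults["classifier"]?.output;
  // The output map is untyped, so narrow before using the value.
  const category = typeof classification?.category === "string" ? classification.category : "unknown";
  // Fall back to a general queue when the category has no dedicated team.
  return ROUTES[category] ?? "support-team";
}
```

Because the Generator only reads from the results map, it degrades gracefully: a missing or failed Classifier result simply routes to the fallback queue.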
AI-Ready Design
The nodes use keyword matching now, but they're structured for an easy swap to AI:
The process() method is the only thing that changes. To upgrade the Classifier to use Claude:
protected async process(input: NodeInput) {
  const response = await anthropic.messages.create({
    model: "claude-3-haiku-20240307",
    max_tokens: 256, // required by the Messages API
    messages: [{ role: "user", content: `Classify this ticket: ${input.event.payload.body}` }],
  });
  // Assumes the first content block is text; a production version should validate this
  // and handle malformed JSON from the model.
  const block = response.content[0];
  return JSON.parse(block.type === "text" ? block.text : "{}");
}

The rest of the system (engine, reliability, dashboard) doesn't change at all.
What's Next
In Module 3, you'll connect these nodes into a workflow using a DAG executor. The classifier, summarizer, and extractor will run in parallel (they're independent), and the generator will wait for all three before producing its output.
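As a preview, that fan-out/fan-in shape can be sketched with Promise.all. This is a simplification; the real Module 3 executor works from a DAG definition rather than hard-coded calls:

```typescript
type Output = Record<string, unknown>;
type NodeFn = () => Promise<Output>;

// Run the three independent nodes concurrently, then feed all of their
// outputs to the generator: three roots, one sink.
async function runTicketWorkflow(
  classifier: NodeFn,
  summarizer: NodeFn,
  extractor: NodeFn,
  generator: (prev: Record<string, Output>) => Promise<Output>,
): Promise<Output> {
  const [classification, summary, extraction] = await Promise.all([
    classifier(),
    summarizer(),
    extractor(),
  ]);
  return generator({ classifier: classification, summarizer: summary, extractor: extraction });
}
```

The total latency of the parallel stage is the slowest of the three nodes, not their sum, which is the main payoff of expressing the workflow as a DAG.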
This is chapter 2 of AI Workflow Automation.
Get the full hands-on course — free during early access. Build the complete system. Your projects become your portfolio.