
executeTopology

@tpmjs/tools-hllm

Execute a topology with streaming SSE response. Supports various topology types including single, sequential, parallel, map-reduce, and more.

Official
agent
v0.1.5
MIT


Installation & Usage

Install this tool and use it with the AI SDK

1. Install the package

npm install @tpmjs/tools-hllm
pnpm add @tpmjs/tools-hllm
yarn add @tpmjs/tools-hllm
bun add @tpmjs/tools-hllm
deno add npm:@tpmjs/tools-hllm

2. Import the tool

import { executeTopology } from '@tpmjs/tools-hllm';

3. Use with AI SDK

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { executeTopology } from '@tpmjs/tools-hllm';

const result = await generateText({
  model: openai('gpt-4o'),
  tools: { executeTopology },
  prompt: 'Your prompt here...',
});

console.log(result.text);

Signature

(intent: string, topologyId: string, config?: Record<string, unknown>, sessionId?: string, conversationHistory?: { role: string; content: string }[]) => Promise<unknown>
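The signature above can also be called directly, outside the AI SDK tool loop. The sketch below illustrates the argument order with a stand-in implementation; the topology id `'sequential'` and the config key `maxIterations` are illustrative assumptions, not confirmed values from the package, and the real tool streams its result over SSE rather than echoing its inputs.

```typescript
type Message = { role: string; content: string };

type ExecuteTopology = (
  intent: string,
  topologyId: string,
  config?: Record<string, unknown>,
  sessionId?: string,
  conversationHistory?: Message[],
) => Promise<unknown>;

// Stand-in implementation that only echoes its arguments, to show the
// argument order; the real executeTopology runs the topology and streams
// its response as server-sent events.
const executeTopology: ExecuteTopology = async (intent, topologyId, config) => ({
  intent,
  topologyId,
  config,
});

async function demo() {
  return executeTopology(
    'Summarize the quarterly report', // intent: the task or prompt
    'sequential',                     // topologyId: assumed example value
    { maxIterations: 3 },             // config: assumed example key
  );
}

demo().then((r) => console.log(r));
```

Note that `config`, `sessionId`, and `conversationHistory` are all optional, so a minimal call needs only the intent and a topology id.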

Tags

agent
ai
execute
hllm
llm
sse
streaming
topology
tpmjs

Parameters

Available configuration options

Auto-extracted
topologyId
Required
Type: string

Identifier of the topology type to execute (e.g. single, sequential, parallel, map-reduce).

intent
Required
Type: string

The task or prompt for execution (1-50,000 characters).

config
Optional
Type: object

Topology-specific configuration (e.g., maxIterations for reflection).

sessionId
Optional
Type: string

Associate execution with a chat session.

conversationHistory
Optional
Type: array

Previous messages for context continuity.
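Taken together, the optional parameters let a caller thread one execution into an ongoing chat. A hypothetical sketch of the argument shapes, following the parameter descriptions above (the session id, topology id, and message contents are illustrative assumptions):

```typescript
type Message = { role: string; content: string };

// Earlier exchange to carry forward as context.
const conversationHistory: Message[] = [
  { role: 'user', content: 'Draft an outline for the launch plan.' },
  { role: 'assistant', content: 'Here is a three-part outline: ...' },
];

// A later call can pass the same sessionId plus the accumulated history
// so the topology sees the earlier messages.
const args = {
  intent: 'Expand part two of the outline',
  topologyId: 'single',   // assumed example topology type
  config: {},             // no topology-specific options in this call
  sessionId: 'chat-1234', // assumed example session id
  conversationHistory,
};

console.log(args.conversationHistory.length);
```

Appending each new user/assistant turn to `conversationHistory` before the next call keeps the context continuous across executions in the same session.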

Schema extracted: 3/3/2026, 4:21:19 AM

README

No README is available for this package.

Statistics

Downloads/month

167

GitHub Stars

0

Quality Score

79%

Bundle Size

NPM Keywords

tpmjs
hllm
ai
llm
topology
agent

Maintainers

thomasdavis (thomasalwyndavis@gmail.com)

Frameworks

vercel-ai